Why You Should Not Estimate Mediated Effects Using the Difference-in-Coefficients Method When the Outcome is Binary.
Pub Date: 2024-10-29 | DOI: 10.1080/00273171.2024.2418515
Judith J M Rijnhart, Matthew J Valente, David P MacKinnon
Despite previous warnings against the use of the difference-in-coefficients method for estimating the indirect effect when the outcome in the mediation model is binary, the difference-in-coefficients method remains readily used in a variety of fields. The continued use of this method is presumably because of the lack of awareness that this method conflates the indirect effect estimate and non-collapsibility. In this paper, we aim to demonstrate the problems associated with the difference-in-coefficients method for estimating indirect effects for mediation models with binary outcomes. We provide a formula that decomposes the difference-in-coefficients estimate into (1) an estimate of non-collapsibility, and (2) an indirect effect estimate. We use a simulation study and an empirical data example to illustrate the impact of non-collapsibility on the difference-in-coefficients estimate of the indirect effect. Further, we demonstrate the application of several alternative methods for estimating the indirect effect, including the product-of-coefficients method and regression-based causal mediation analysis. The results emphasize the importance of choosing a method for estimating the indirect effect that is not affected by non-collapsibility.
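A minimal base-R sketch (not the authors' code; sample size and effect sizes are arbitrary) of the two estimators the abstract contrasts: the difference-in-coefficients estimate from two logistic regressions and the product-of-coefficients estimate. Even without any confounding, the two diverge when the outcome is binary because logistic regression coefficients are non-collapsible.

```r
# Difference-in-coefficients vs. product-of-coefficients with a binary outcome.
set.seed(1)
n <- 50000
x <- rnorm(n)                        # exposure
m <- 0.5 * x + rnorm(n)              # mediator: a-path = 0.5
eta <- -1 + 0.3 * x + 0.6 * m        # true direct effect = 0.3, b-path = 0.6
y <- rbinom(n, 1, plogis(eta))       # binary outcome via a logistic model

a <- coef(lm(m ~ x))["x"]                              # a path (linear)
b <- coef(glm(y ~ x + m, family = binomial))["m"]      # b path (logistic)
c_total  <- coef(glm(y ~ x,     family = binomial))["x"]  # total effect
c_direct <- coef(glm(y ~ x + m, family = binomial))["x"]  # direct effect

# The difference conflates the indirect effect with non-collapsibility;
# the product is unaffected by non-collapsibility.
c(difference_in_coefficients = unname(c_total - c_direct),
  product_of_coefficients    = unname(a * b))
```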
{"title":"Why You Should Not Estimate Mediated Effects Using the Difference-in-Coefficients Method When the Outcome is Binary.","authors":"Judith J M Rijnhart, Matthew J Valente, David P MacKinnon","doi":"10.1080/00273171.2024.2418515","DOIUrl":"https://doi.org/10.1080/00273171.2024.2418515","url":null,"abstract":"<p><p>Despite previous warnings against the use of the difference-in-coefficients method for estimating the indirect effect when the outcome in the mediation model is binary, the difference-in-coefficients method remains readily used in a variety of fields. The continued use of this method is presumably because of the lack of awareness that this method conflates the indirect effect estimate and non-collapsibility. In this paper, we aim to demonstrate the problems associated with the difference-in-coefficients method for estimating indirect effects for mediation models with binary outcomes. We provide a formula that decomposes the difference-in-coefficients estimate into (1) an estimate of non-collapsibility, and (2) an indirect effect estimate. We use a simulation study and an empirical data example to illustrate the impact of non-collapsibility on the difference-in-coefficients estimate of the indirect effect. Further, we demonstrate the application of several alternative methods for estimating the indirect effect, including the product-of-coefficients method and regression-based causal mediation analysis. The results emphasize the importance of choosing a method for estimating the indirect effect that is not affected by non-collapsibility.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142523610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Causal View on Bias in Missing Data Imputation: The Impact of Evil Auxiliary Variables on Norming of Test Scores.
Pub Date: 2024-10-20 | DOI: 10.1080/00273171.2024.2412682
Erik Sengewald, Katinka Hardt, Marie-Ann Sengewald
Among the most important merits of modern missing data techniques such as multiple imputation (MI) and full-information maximum likelihood estimation is the possibility to include additional information about the missingness process via auxiliary variables. During the past decade, the choice of auxiliary variables has been investigated under a variety of different conditions and more recent research points to the potentially biasing effect of certain auxiliary variables, particularly colliders (Thoemmes & Rose, 2014). In this article, we further extend biasing mechanisms of certain auxiliary variables considered in previous research and thereby focus on their effects on individual diagnosis based on norming, in which the whole distribution of a variable is of interest rather than average coefficients (e.g., means). For this, we first provide the theoretical underpinnings of the mechanisms under study and then provide two focused simulations that (i) directly expand on the collider scenario in Thoemmes and Rose (2014, appendix A) by considering outcomes that are relevant to norming and (ii) extend the scenarios under consideration by instrumental variable mechanisms. We illustrate the bias mechanisms for two different norming approaches and exemplify the procedures by means of an empirical example. We end by discussing limitations and implications of our research.
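A hedged base-R illustration of the core mechanism behind a collider auxiliary variable, using hypothetical variable names: conditioning on a collider of the analysis variable and an independent cause of missingness induces a spurious association, which an imputation model that includes the collider as an auxiliary variable would inherit.

```r
# Collider mechanism behind "evil" auxiliary variables (data generation only).
set.seed(2)
n <- 100000
y <- rnorm(n)                              # analysis variable (e.g., a test score)
r <- rnorm(n)                              # independent cause of missingness
collider <- 0.7 * y + 0.7 * r + rnorm(n)   # auxiliary variable caused by both

cor(y, r)                           # ~ 0: y and r are marginally independent
coef(lm(y ~ r + collider))["r"]     # markedly nonzero: conditioning on the
                                    # collider induces a spurious y-r association
```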
{"title":"A Causal View on Bias in Missing Data Imputation: The Impact of Evil Auxiliary Variables on Norming of Test Scores.","authors":"Erik Sengewald, Katinka Hardt, Marie-Ann Sengewald","doi":"10.1080/00273171.2024.2412682","DOIUrl":"https://doi.org/10.1080/00273171.2024.2412682","url":null,"abstract":"<p><p>Among the most important merits of modern missing data techniques such as multiple imputation (MI) and full-information maximum likelihood estimation is the possibility to include additional information about the missingness process via auxiliary variables. During the past decade, the choice of auxiliary variables has been investigated under a variety of different conditions and more recent research points to the potentially biasing effect of certain auxiliary variables, particularly colliders (Thoemmes & Rose, 2014). In this article, we further extend biasing mechanisms of certain auxiliary variables considered in previous research and thereby focus on their effects on individual diagnosis based on norming, in which the whole distribution of a variable is of interest rather than average coefficients (e.g., means). For this, we first provide the theoretical underpinnings of the mechanisms under study and then provide two focused simulations that (i) directly expand on the collider scenario in Thoemmes and Rose (2014, appendix A) by considering outcomes that are relevant to norming and (ii) extend the scenarios under consideration by instrumental variable mechanisms. We illustrate the bias mechanisms for two different norming approaches and exemplify the procedures by means of an empirical example. We end by discussing limitations and implications of our research.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142480539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Make Some Noise: Generating Data from Imperfect Factor Models.
Pub Date: 2024-10-16 | DOI: 10.1080/00273171.2024.2410760
Justin D Kracht, Niels G Waller
Researchers simulating covariance structure models sometimes add model error to their data to produce model misfit. Presently, the most popular methods for generating error-perturbed data are those by Tucker, Koopman, and Linn (TKL), Cudeck and Browne (CB), and Wu and Browne (WB). Although all of these methods include parameters that control the degree of model misfit, none can generate data that reproduce multiple fit indices. To address this issue, we describe a multiple-target TKL method that can generate error-perturbed data that will reproduce target RMSEA and CFI values either individually or together. To evaluate this method, we simulated error-perturbed correlation matrices for an array of factor analysis models using the multiple-target TKL method, the CB method, and the WB method. Our results indicated that the multiple-target TKL method produced solutions with RMSEA and CFI values that were closer to their target values than those of the alternative methods. Thus, the multiple-target TKL method should be a useful tool for researchers who wish to generate error-perturbed correlation matrices with a known degree of model error. All functions that are described in this work are available in the fungible R library. Additional materials (e.g., R code, supplemental results) are available at https://osf.io/vxr8d/.
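A small base-R sketch, using the standard ML fit-function definitions, of how population RMSEA and CFI can be computed for a hypothesized model-implied correlation matrix against an error-perturbed correlation matrix; these are the quantities a multiple-target model-error method is tuned to reproduce. The function names and the choice to pass df and N as arguments are illustrative and not taken from the fungible package.

```r
# ML discrepancy between an observed/perturbed matrix S and a model-implied Sigma.
ml_discrepancy <- function(S, Sigma) {
  p <- nrow(S)
  log(det(Sigma)) - log(det(S)) + sum(diag(S %*% solve(Sigma))) - p
}

# Population-style RMSEA and CFI of Sigma (with model df) against a perturbed R.
fit_indices <- function(R, Sigma, df, N) {
  p       <- nrow(R)
  df_b    <- p * (p - 1) / 2                       # df of the independence baseline
  chisq_m <- (N - 1) * ml_discrepancy(R, Sigma)
  chisq_b <- (N - 1) * ml_discrepancy(R, diag(p))
  rmsea <- sqrt(max((chisq_m - df) / (df * (N - 1)), 0))
  cfi   <- 1 - max(chisq_m - df, 0) / max(chisq_b - df_b, chisq_m - df, 0)
  c(RMSEA = rmsea, CFI = cfi)
}

# Toy example: a clean two-factor structure plus a small off-diagonal perturbation.
Lambda <- matrix(c(rep(0.7, 3), rep(0, 6), rep(0.7, 3)), nrow = 6)
Sigma  <- Lambda %*% t(Lambda); diag(Sigma) <- 1
R <- Sigma; R[1, 4] <- R[4, 1] <- R[1, 4] + 0.10   # error-perturbed matrix
fit_indices(R, Sigma, df = 8, N = 500)             # df for 6 indicators, 2 correlated factors
```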
{"title":"Make Some Noise: Generating Data from Imperfect Factor Models.","authors":"Justin D Kracht, Niels G Waller","doi":"10.1080/00273171.2024.2410760","DOIUrl":"https://doi.org/10.1080/00273171.2024.2410760","url":null,"abstract":"<p><p>Researchers simulating covariance structure models sometimes add model error to their data to produce model misfit. Presently, the most popular methods for generating error-perturbed data are those by Tucker, Koopman, and Linn (TKL), Cudeck and Browne (CB), and Wu and Browne (WB). Although all of these methods include parameters that control the degree of model misfit, none can generate data that reproduce multiple fit indices. To address this issue, we describe a multiple-target TKL method that can generate error-perturbed data that will reproduce target RMSEA and CFI values either individually or together. To evaluate this method, we simulated error-perturbed correlation matrices for an array of factor analysis models using the multiple-target TKL method, the CB method, and the WB method. Our results indicated that the multiple-target TKL method produced solutions with RMSEA and CFI values that were closer to their target values than those of the alternative methods. Thus, the multiple-target TKL method should be a useful tool for researchers who wish to generate error-perturbed correlation matrices with a known degree of model error. All functions that are described in this work are available in the fungible <math><mrow><mi>R</mi></mrow></math> library. Additional materials (e.g., <math><mrow><mi>R</mi></mrow></math> code, supplemental results) are available at https://osf.io/vxr8d/.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142480540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Estimation Procedures for Reducing Dimensionality in Psychological Network Modeling.
Pub Date: 2024-09-16 | DOI: 10.1080/00273171.2024.2395941
Dingjing Shi, Alexander P Christensen, Eric Anthony Day, Hudson F Golino, Luis Eduardo Garrido
To understand psychological data, it is crucial to examine the structure and dimensions of variables. In this study, we examined alternative estimation algorithms to the conventional GLASSO-based exploratory graph analysis (EGA) in network psychometric models to assess the dimensionality structure of the data. The study applied Bayesian conjugate or Jeffreys' priors to estimate the graphical structure and then used the Louvain community detection algorithm to partition and identify groups of nodes, which allowed the detection of the multi- and unidimensional factor structures. Monte Carlo simulations suggested that the two alternative Bayesian estimation algorithms had comparable or better performance when compared with the GLASSO-based EGA and conventional parallel analysis (PA). When estimating the multidimensional factor structure, the analytically based method (i.e., EGA.analytical) showed the best balance between accuracy and mean biased/absolute errors, with the highest accuracy tied with EGA but with the smallest errors. The sampling-based approach (EGA.sampling) yielded higher accuracy and smaller errors than PA; lower accuracy but also lower errors than EGA. Techniques from the two algorithms had more stable performance than EGA and PA across different data conditions. When estimating the unidimensional structure, the PA technique performed the best, followed closely by EGA, and then EGA.analytical and EGA.sampling. Furthermore, the study explored four full Bayesian techniques to assess dimensionality in network psychometrics. The results demonstrated superior performance when using Bayesian hypothesis testing or deriving posterior samples of graph structures under small sample sizes. The study recommends using the EGA.analytical technique as an alternative tool for assessing dimensionality and advocates for the usefulness of the EGA.sampling method as a valuable alternate technique. The findings also indicated encouraging results for extending the regularization-based network modeling EGA method to the Bayesian framework and discussed future directions in this line of work. The study illustrated the practical application of the techniques to two empirical examples in R.
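A rough sketch of the node-partitioning step underlying EGA-type dimensionality assessment, assuming an unregularized partial-correlation network rather than the GLASSO or Bayesian estimators studied in the article; it only illustrates how Louvain communities map onto estimated dimensions (requires the igraph package).

```r
library(igraph)

# EGA-like dimensionality sketch: partial-correlation network + Louvain communities.
ega_like_dimensions <- function(data) {
  R    <- cor(data)
  K    <- solve(R)              # precision matrix
  pcor <- -cov2cor(K)           # partial correlations between pairs of nodes
  diag(pcor) <- 0
  g <- graph_from_adjacency_matrix(abs(pcor), mode = "undirected",
                                   weighted = TRUE, diag = FALSE)
  membership(cluster_louvain(g))  # each community = one estimated dimension
}
```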
{"title":"Exploring Estimation Procedures for Reducing Dimensionality in Psychological Network Modeling.","authors":"Dingjing Shi, Alexander P Christensen, Eric Anthony Day, Hudson F Golino, Luis Eduardo Garrido","doi":"10.1080/00273171.2024.2395941","DOIUrl":"https://doi.org/10.1080/00273171.2024.2395941","url":null,"abstract":"<p><p>To understand psychological data, it is crucial to examine the structure and dimensions of variables. In this study, we examined alternative estimation algorithms to the conventional GLASSO-based exploratory graph analysis (EGA) in network psychometric models to assess the dimensionality structure of the data. The study applied Bayesian conjugate or Jeffreys' priors to estimate the graphical structure and then used the Louvain community detection algorithm to partition and identify groups of nodes, which allowed the detection of the multi- and unidimensional factor structures. Monte Carlo simulations suggested that the two alternative Bayesian estimation algorithms had comparable or better performance when compared with the GLASSO-based EGA and conventional parallel analysis (PA). When estimating the multidimensional factor structure, the analytically based method (i.e., EGA.analytical) showed the best balance between accuracy and mean biased/absolute errors, with the highest accuracy tied with EGA but with the smallest errors. The sampling-based approach (EGA.sampling) yielded higher accuracy and smaller errors than PA; lower accuracy but also lower errors than EGA. Techniques from the two algorithms had more stable performance than EGA and PA across different data conditions. When estimating the unidimensional structure, the PA technique performed the best, followed closely by EGA, and then EGA.analytical and EGA.sampling. Furthermore, the study explored four full Bayesian techniques to assess dimensionality in network psychometrics. The results demonstrated superior performance when using Bayesian hypothesis testing or deriving posterior samples of graph structures under small sample sizes. The study recommends using the EGA.analytical technique as an alternative tool for assessing dimensionality and advocates for the usefulness of the EGA.sampling method as a valuable alternate technique. The findings also indicated encouraging results for extending the regularization-based network modeling EGA method to the Bayesian framework and discussed future directions in this line of work. The study illustrated the practical application of the techniques to two empirical examples in R.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142300706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Review of Some of the History of Factorial Invariance and Differential Item Functioning.
Pub Date: 2024-09-12 | DOI: 10.1080/00273171.2024.2396148
David Thissen
The concept of factorial invariance has evolved since it originated in the 1930s as a criterion for the usefulness of the multiple factor model; it has become a form of analysis supporting the validity of inferences about group differences on underlying latent variables. The analysis of differential item functioning (DIF) arose in the literature of item response theory (IRT), where its original purpose was the detection and removal of test items that are differentially difficult for, or biased against, one subpopulation or another. The two traditions merge at the level of the underlying latent variable model, but their separate origins and different purposes have led them to differ in details of terminology and procedure. This review traces some aspects of the histories of the two traditions, ultimately drawing some conclusions about how analysts may draw on elements of both, and how the nature of the research question determines the procedures used. Whether statistical tests are grouped by parameter (as in studies of factorial invariance) or across parameters by variable (as in DIF analysis) depends on the context and is independent of the model, as are subtle aspects of the order of the tests. In any case in which DIF or partial invariance is a possibility, the invariant parameters, or anchor items in DIF analysis, are best selected in an interplay between the statistics and judgment about what is being measured.
{"title":"A Review of Some of the History of Factorial Invariance and Differential Item Functioning.","authors":"David Thissen","doi":"10.1080/00273171.2024.2396148","DOIUrl":"https://doi.org/10.1080/00273171.2024.2396148","url":null,"abstract":"The concept of factorial invariance has evolved since it originated in the 1930s as a criterion for the usefulness of the multiple factor model; it has become a form of analysis supporting the validity of inferences about group differences on underlying latent variables. The analysis of differential item functioning (DIF) arose in the literature of item response theory (IRT), where its original purpose was the detection and removal of test items that are differentially difficult for, or biased against, one subpopulation or another. The two traditions merge at the level of the underlying latent variable model, but their separate origins and different purposes have led them to differ in details of terminology and procedure. This review traces some aspects of the histories of the two traditions, ultimately drawing some conclusions about how analysts may draw on elements of both, and how the nature of the research question determines the procedures used. Whether statistical tests are grouped by parameter (as in studies of factorial invariance) or across parameters by variable (as in DIF analysis) depends on the context and is independent of the model, as are subtle aspects of the order of the tests. In any case in which DIF or partial invariance is a possibility, the invariant parameters, or anchor items in DIF analysis, are best selected in an interplay between the statistics and judgment about what is being measured.","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":null,"pages":null},"PeriodicalIF":3.8,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142214225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Determining Sample Size Requirements in EFA Solutions: A Simple Empirical Proposal.
Pub Date: 2024-09-01 | Epub Date: 2024-05-08 | DOI: 10.1080/00273171.2024.2342324
Urbano Lorenzo-Seva, Pere J Ferrando
In unrestricted or exploratory factor analysis (EFA), there is a wide range of recommendations about the size samples should be to attain correct and stable solutions. In general, however, these recommendations are either rules of thumb or based on simulation results. As it is hard to establish the extent to which a particular data set suits the conditions used in a simulation study, the advice produced by simulation studies is not direct enough to be of practical use. Instead of trying to provide general and complex recommendations, in this article, we propose to estimate the sample size that is needed to analyze a data set at hand. The estimation takes into account the specified EFA model. The proposal is based on an intensive simulation process in which the sample correlation matrix is used as a basis for generating data sets from a pseudo-population in which the parent correlation holds exactly, and the criterion for determining the size required is a threshold that quantifies the closeness between the pseudo-population and the sample reproduced correlation matrices. The simulation results suggest that the proposal works well and that the determinants identified agree with those in the literature.
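A simplified base-R illustration, not the authors' implementation, of the general logic: treat the sample correlation matrix as a pseudo-population, repeatedly draw samples of a candidate size, and quantify how close the sample-reproduced correlation matrix comes to the pseudo-population one. The function names, the RMSR closeness measure, and the use of factanal are assumptions for this sketch.

```r
library(MASS)  # for mvrnorm

# Reproduced correlation matrix implied by an EFA solution with n_factors factors.
reproduced_R <- function(R, n_factors, n_obs) {
  fa <- factanal(covmat = R, factors = n_factors, n.obs = n_obs)
  L  <- fa$loadings
  L %*% t(L) + diag(fa$uniquenesses)
}

# Average closeness (RMSR) at candidate sample size N, simulating from the
# pseudo-population in which the sample correlation matrix holds exactly.
closeness_at_n <- function(R_sample, n_factors, N, n_obs_orig, reps = 50) {
  R_pop <- reproduced_R(R_sample, n_factors, n_obs_orig)   # pseudo-population target
  rmsr <- replicate(reps, {
    x   <- mvrnorm(N, mu = rep(0, nrow(R_sample)), Sigma = R_sample)
    R_s <- reproduced_R(cor(x), n_factors, N)
    sqrt(mean((R_s[lower.tri(R_s)] - R_pop[lower.tri(R_pop)])^2))
  })
  mean(rmsr)   # compare against a chosen closeness threshold
}
```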
{"title":"Determining Sample Size Requirements in EFA Solutions: A Simple Empirical Proposal.","authors":"Urbano Lorenzo-Seva, Pere J Ferrando","doi":"10.1080/00273171.2024.2342324","DOIUrl":"10.1080/00273171.2024.2342324","url":null,"abstract":"<p><p>In unrestricted or exploratory factor analysis (EFA), there is a wide range of recommendations about the size samples should be to attain correct and stable solutions. In general, however, these recommendations are either rules of thumb or based on simulation results. As it is hard to establish the extent to which a particular data set suits the conditions used in a simulation study, the advice produced by simulation studies is not direct enough to be of practical use. Instead of trying to provide general and complex recommendations, in this article, we propose to estimate the sample size that is needed to analyze a data set at hand. The estimation takes into account the specified EFA model. The proposal is based on an intensive simulation process in which the sample correlation matrix is used as a basis for generating data sets from a pseudo-population in which the parent correlation holds exactly, and the criterion for determining the size required is a threshold that quantifies the closeness between the pseudo-population and the sample reproduced correlation matrices. The simulation results suggest that the proposal works well and that the determinants identified agree with those in the literature.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140877938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Conditional Entropy Networks of Ordinal Measures to Examine Changes in Self-Worth Among Adolescent Students in High School.
Pub Date: 2024-09-01 | Epub Date: 2024-07-12 | DOI: 10.1080/00273171.2024.2372635
Emanuela Furfaro, Fushing Hsieh, Maureen R Weiss, Emilio Ferrer
We implement an analytic approach for ordinal measures and we use it to investigate the structure and the changes over time of self-worth in a sample of adolescent students in high school. We represent the variations in self-worth and its various sub-domains using entropy-based measures that capture the observed uncertainty. We then study the evolution of the entropy across four time points throughout a semester of high school. Our analytic approach yields information about the configuration of the various dimensions of the self together with time-related changes and associations among these dimensions. We represent the results using a network that depicts self-worth changes over time. This approach also identifies groups of adolescent students who show different patterns of associations, thus emphasizing the need to consider heterogeneity in the data.
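A minimal base-R sketch of the entropy building blocks such an approach rests on: the Shannon entropy of an ordinal item and the conditional entropy H(Y | X). The 5-point ratings at two time points in the example are hypothetical.

```r
# Shannon entropy of an ordinal variable (in bits).
shannon_entropy <- function(x) {
  p <- table(x) / length(x)
  -sum(p * log2(p))
}

# Conditional entropy H(Y | X): remaining uncertainty in Y once X is known.
conditional_entropy <- function(y, x) {
  joint <- table(x, y) / length(x)     # joint distribution P(x, y)
  px    <- rowSums(joint)              # marginal P(x)
  cond  <- joint / px                  # conditional P(y | x), row-wise
  -sum(joint[joint > 0] * log2(cond[joint > 0]))
}

# Hypothetical 5-point ratings at two measurement occasions.
set.seed(3)
t1 <- sample(1:5, 200, replace = TRUE)
t2 <- pmin(pmax(t1 + sample(-1:1, 200, replace = TRUE), 1), 5)
c(H_t2 = shannon_entropy(t2), H_t2_given_t1 = conditional_entropy(t2, t1))
```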
{"title":"Using Conditional Entropy Networks of Ordinal Measures to Examine Changes in Self-Worth Among Adolescent Students in High School.","authors":"Emanuela Furfaro, Fushing Hsieh, Maureen R Weiss, Emilio Ferrer","doi":"10.1080/00273171.2024.2372635","DOIUrl":"10.1080/00273171.2024.2372635","url":null,"abstract":"<p><p>We implement an analytic approach for ordinal measures and we use it to investigate the structure and the changes over time of self-worth in a sample of adolescents students in high school. We represent the variations in self-worth and its various sub-domains using entropy-based measures that capture the observed uncertainty. We then study the evolution of the entropy across four time points throughout a semester of high school. Our analytic approach yields information about the configuration of the various dimensions of the self together with time-related changes and associations among these dimensions. We represent the results using a network that depicts self-worth changes over time. This approach also identifies groups of adolescent students who show different patterns of associations, thus emphasizing the need to consider heterogeneity in the data.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141602165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Effects of Questionnaire Length on the Relative Impact of Response Styles in Ambulatory Assessment.
Pub Date: 2024-09-01 | Epub Date: 2024-05-23 | DOI: 10.1080/00273171.2024.2354233
Kilian Hasselhorn, Charlotte Ottenstein, Thorsten Meiser, Tanja Lischetzke
Ambulatory assessment (AA) is becoming an increasingly popular research method in the fields of psychology and life science. Nevertheless, knowledge about the effects that design choices, such as questionnaire length (i.e., number of items per questionnaire), have on AA data quality is still surprisingly restricted. Additionally, response styles (RS), which threaten data quality, have hardly been analyzed in the context of AA. The aim of the current research was to experimentally manipulate questionnaire length and investigate the association between questionnaire length and RS in an AA study. We expected that the group with the longer (82-item) questionnaire would show greater reliance on RS relative to the substantive traits than the group with the shorter (33-item) questionnaire. Students (n = 284) received questionnaires three times a day for 14 days. We used a multigroup two-dimensional item response tree model in a multilevel structural equation modeling framework to estimate midpoint and extreme RS in our AA study. We found that the long questionnaire group showed a greater reliance on RS relative to trait-based processes than the short questionnaire group. Although further validation of our findings is necessary, we hope that researchers consider our findings when planning an AA study in the future.
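A base-R sketch of the standard Böckenholt-style tree decomposition of a 5-point response into midpoint, direction, and extreme pseudo-items, which is the kind of recoding a two-dimensional item response tree model for midpoint and extreme response styles builds on (the exact tree specification used in the study may differ).

```r
# Decompose a 5-point Likert response into IRTree pseudo-items.
irtree_nodes <- function(resp) {                 # resp coded 1..5
  midpoint  <- ifelse(resp == 3, 1, 0)                          # midpoint RS node
  direction <- ifelse(resp == 3, NA, ifelse(resp >= 4, 1, 0))   # agree vs. disagree
  extreme   <- ifelse(resp %in% c(1, 5), 1,
                      ifelse(resp %in% c(2, 4), 0, NA))          # extreme RS node
  data.frame(midpoint, direction, extreme)
}

irtree_nodes(c(1, 2, 3, 4, 5))   # one row of pseudo-items per raw response
```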
{"title":"The Effects of Questionnaire Length on the Relative Impact of Response Styles in Ambulatory Assessment.","authors":"Kilian Hasselhorn, Charlotte Ottenstein, Thorsten Meiser, Tanja Lischetzke","doi":"10.1080/00273171.2024.2354233","DOIUrl":"10.1080/00273171.2024.2354233","url":null,"abstract":"<p><p>Ambulatory assessment (AA) is becoming an increasingly popular research method in the fields of psychology and life science. Nevertheless, knowledge about the effects that design choices, such as questionnaire length (i.e., number of items per questionnaire), have on AA data quality is still surprisingly restricted. Additionally, response styles (RS), which threaten data quality, have hardly been analyzed in the context of AA. The aim of the current research was to experimentally manipulate questionnaire length and investigate the association between questionnaire length and RS in an AA study. We expected that the group with the longer (82-item) questionnaire would show greater reliance on RS relative to the substantive traits than the group with the shorter (33-item) questionnaire. Students (<i>n</i> = 284) received questionnaires three times a day for 14 days. We used a multigroup two-dimensional item response tree model in a multilevel structural equation modeling framework to estimate midpoint and extreme RS in our AA study. We found that the long questionnaire group showed a greater reliance on RS relative to trait-based processes than the short questionnaire group. Although further validation of our findings is necessary, we hope that researchers consider our findings when planning an AA study in the future.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141082877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multilevel Latent Differential Structural Equation Model with Short Time Series and Time-Varying Covariates: A Comparison of Frequentist and Bayesian Estimators.
Pub Date: 2024-09-01 | Epub Date: 2024-05-31 | DOI: 10.1080/00273171.2024.2347959
Young Won Cho, Sy-Miin Chow, Christina M Marini, Lynn M Martire
Continuous-time modeling using differential equations is a promising technique to model change processes with longitudinal data. Among ways to fit this model, the Latent Differential Structural Equation Modeling (LDSEM) approach defines latent derivative variables within a structural equation modeling (SEM) framework, thereby allowing researchers to leverage advantages of the SEM framework for model building, estimation, inference, and comparison purposes. Still, a few issues remain unresolved, including performance of multilevel variations of the LDSEM under short time lengths (e.g., 14 time points), particularly when coupled multivariate processes and time-varying covariates are involved. Additionally, the possibility of using Bayesian estimation to facilitate the estimation of multilevel LDSEM (M-LDSEM) models with complex and higher-dimensional random effect structures has not been investigated. We present a series of Monte Carlo simulations to evaluate three possible approaches to fitting M-LDSEM, including: frequentist single-level and two-level robust estimators and Bayesian two-level estimator. Our findings suggested that the Bayesian approach outperformed other frequentist approaches. The effects of time-varying covariates are well recovered, and coupling parameters are the least biased especially using higher-order derivative information with the Bayesian estimator. Finally, an empirical example is provided to show the applicability of the approach.
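To make the idea of "latent derivative variables" concrete, here is a hedged base-R sketch of a related non-SEM technique, generalized local linear approximation (GLLA), which estimates a variable and its first two derivatives from a time-delay embedded series. The embedding dimension and time step are illustrative choices, and this is not the LDSEM estimator evaluated in the article.

```r
# GLLA estimates of x, dx/dt, and d2x/dt2 from a time-delay embedded series.
glla <- function(x, embed_dim = 5, delta_t = 1) {
  n <- length(x) - embed_dim + 1
  X <- sapply(0:(embed_dim - 1), function(j) x[(1:n) + j])  # embedding matrix
  v <- (1:embed_dim - mean(1:embed_dim)) * delta_t          # centered time offsets
  L <- cbind(1, v, v^2 / 2)                                 # loadings for x, dx, d2x
  W <- L %*% solve(t(L) %*% L)                              # GLLA weight matrix
  est <- X %*% W
  colnames(est) <- c("x", "dx", "d2x")
  est
}

glla(sin(seq(0, 2 * pi, length.out = 30)))   # derivatives of a toy oscillation
```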
{"title":"Multilevel Latent Differential Structural Equation Model with Short Time Series and Time-Varying Covariates: A Comparison of Frequentist and Bayesian Estimators.","authors":"Young Won Cho, Sy-Miin Chow, Christina M Marini, Lynn M Martire","doi":"10.1080/00273171.2024.2347959","DOIUrl":"10.1080/00273171.2024.2347959","url":null,"abstract":"<p><p>Continuous-time modeling using differential equations is a promising technique to model change processes with longitudinal data. Among ways to fit this model, the Latent Differential Structural Equation Modeling (LDSEM) approach defines latent derivative variables within a structural equation modeling (SEM) framework, thereby allowing researchers to leverage advantages of the SEM framework for model building, estimation, inference, and comparison purposes. Still, a few issues remain unresolved, including performance of multilevel variations of the LDSEM under short time lengths (e.g., 14 time points), particularly when coupled multivariate processes and time-varying covariates are involved. Additionally, the possibility of using Bayesian estimation to facilitate the estimation of multilevel LDSEM (M-LDSEM) models with complex and higher-dimensional random effect structures has not been investigated. We present a series of Monte Carlo simulations to evaluate three possible approaches to fitting M-LDSEM, including: frequentist single-level and two-level robust estimators and Bayesian two-level estimator. Our findings suggested that the Bayesian approach outperformed other frequentist approaches. The effects of time-varying covariates are well recovered, and coupling parameters are the least biased especially using higher-order derivative information with the Bayesian estimator. Finally, an empirical example is provided to show the applicability of the approach.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11424268/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141184869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing Conditional Independence in Psychometric Networks: An Analysis of Three Bayesian Methods.
Pub Date: 2024-09-01 | Epub Date: 2024-05-11 | DOI: 10.1080/00273171.2024.2345915
Nikola Sekulovski, Sara Keetelaar, Karoline Huth, Eric-Jan Wagenmakers, Riet van Bork, Don van den Bergh, Maarten Marsman
Network psychometrics uses graphical models to assess the network structure of psychological variables. An important task in their analysis is determining which variables are unrelated in the network, i.e., are independent given the rest of the network variables. This conditional independence structure is a gateway to understanding the causal structure underlying psychological processes. Thus, it is crucial to have an appropriate method for evaluating conditional independence and dependence hypotheses. Bayesian approaches to testing such hypotheses allow researchers to differentiate between absence of evidence and evidence of absence of connections (edges) between pairs of variables in a network. Three Bayesian approaches to assessing conditional independence have been proposed in the network psychometrics literature. We believe that their theoretical foundations are not widely known, and therefore we provide a conceptual review of the proposed methods and highlight their strengths and limitations through a simulation study. We also illustrate the methods using an empirical example with data on Dark Triad Personality. Finally, we provide recommendations on how to choose the optimal method and discuss the current gaps in the literature on this important topic.
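As a purely illustrative sketch (not one of the three Bayesian methods reviewed), a crude BIC-based Bayes factor for a single edge can be formed from node-wise regressions with and without the candidate neighbor; it conveys how a Bayes factor distinguishes evidence of absence (large BF01) from absence of evidence (BF01 near 1). Function and variable names below are hypothetical.

```r
# Approximate BF01 (support for conditional independence of nodes i and j)
# from the BIC difference between node-wise regressions with and without j.
edge_bf01 <- function(data, i, j) {
  others <- setdiff(colnames(data), c(i, j))
  f1 <- reformulate(c(j, others), response = i)   # edge present
  f0 <- reformulate(others, response = i)          # edge absent
  bic1 <- BIC(lm(f1, data = data))
  bic0 <- BIC(lm(f0, data = data))
  exp((bic1 - bic0) / 2)                           # BF01 via the BIC approximation
}

# Toy chain a -> b -> c: a and c are conditionally independent given b.
set.seed(4)
d <- data.frame(a = rnorm(200))
d$b <- 0.5 * d$a + rnorm(200)
d$c <- 0.5 * d$b + rnorm(200)
edge_bf01(d, "a", "c")   # BF01 > 1: evidence of absence of the a-c edge
```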
{"title":"Testing Conditional Independence in Psychometric Networks: An Analysis of Three Bayesian Methods.","authors":"Nikola Sekulovski, Sara Keetelaar, Karoline Huth, Eric-Jan Wagenmakers, Riet van Bork, Don van den Bergh, Maarten Marsman","doi":"10.1080/00273171.2024.2345915","DOIUrl":"10.1080/00273171.2024.2345915","url":null,"abstract":"<p><p>Network psychometrics uses graphical models to assess the network structure of psychological variables. An important task in their analysis is determining which variables are unrelated in the network, i.e., are independent given the rest of the network variables. This conditional independence structure is a gateway to understanding the causal structure underlying psychological processes. Thus, it is crucial to have an appropriate method for evaluating conditional independence and dependence hypotheses. Bayesian approaches to testing such hypotheses allow researchers to differentiate between absence of evidence and evidence of absence of connections (edges) between pairs of variables in a network. Three Bayesian approaches to assessing conditional independence have been proposed in the network psychometrics literature. We believe that their theoretical foundations are not widely known, and therefore we provide a conceptual review of the proposed methods and highlight their strengths and limitations through a simulation study. We also illustrate the methods using an empirical example with data on Dark Triad Personality. Finally, we provide recommendations on how to choose the optimal method and discuss the current gaps in the literature on this important topic.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140908878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}