Title: Methodological and Statistical Practices of Using Symptom Networks to Evaluate Mental Health Interventions: A Review and Reflections.
Authors: Lea Schumacher, Julian Burger, Jette Echterhoff, Levente Kriston
Pub Date: 2024-07-01 | DOI: 10.1080/00273171.2024.2335401
The network approach to psychopathology, which assesses associations between individual symptoms, has recently been applied to evaluate treatments for mental disorders. While various options for conducting network analyses in intervention research exist, an overview and an evaluation of these approaches are currently missing. Therefore, we conducted a review of network analyses in intervention research. Studies were included if they constructed a symptom network, analyzed data collected before, during, or after treatment of a mental disorder, and yielded information about the treatment effect. The 56 included studies were reviewed regarding their methodological and analytic strategies. About half of the studies based on data from randomized trials conducted a network intervention analysis, while the other half compared networks between treatment groups. The majority of studies estimated cross-sectional networks, even when repeated measures were available. All but five studies investigated networks at the group level. This review highlights that current methodological practices limit the information that can be gained through network analyses in intervention research. We discuss the strengths and limitations of specific methodological and analytic strategies and propose that further work is needed to realize the full potential of the network approach in intervention research.
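The review above does not prescribe a single estimator, but regularized partial-correlation (Gaussian graphical model) networks are a common building block in the studies it covers. The sketch below is offered only as a rough illustration under assumed data: it fits such a network to simulated pre- and post-treatment symptom scores with scikit-learn, and the variables, sample sizes, and the crude group-level comparison are invented rather than taken from any reviewed study.

```python
# Illustrative sketch (not from the reviewed studies): estimating regularized
# partial-correlation symptom networks before and after treatment.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(42)
n_patients, n_symptoms = 200, 8

# Hypothetical symptom severity scores at baseline and post-treatment.
baseline = rng.normal(size=(n_patients, n_symptoms))
post = 0.6 * baseline + rng.normal(scale=0.8, size=(n_patients, n_symptoms))

def partial_correlation_network(data):
    """Fit a graphical lasso and convert the precision matrix to partial correlations."""
    model = GraphicalLassoCV().fit(data)
    precision = model.precision_
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)
    np.fill_diagonal(partial_corr, 0.0)
    return partial_corr

network_pre = partial_correlation_network(baseline)
network_post = partial_correlation_network(post)

# A crude group-level comparison: how much do edge strengths change on average?
print("Mean absolute edge change:", np.abs(network_post - network_pre).mean())
```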
Pub Date: 2024-07-01 | Epub Date: 2024-05-11 | DOI: 10.1080/00273171.2024.2337340
Title: Bayesian Multivariate Logistic Regression for Superiority and Inferiority Decision-Making under Observable Treatment Heterogeneity.
Authors: Xynthia Kavelaars, Joris Mulder, Maurits Kaptein
The effects of treatments may differ between persons with different characteristics. Addressing such treatment heterogeneity is crucial for investigating whether patients with specific characteristics are likely to benefit from a new treatment. The current paper presents a novel Bayesian method for superiority decision-making in the context of randomized controlled trials with multivariate binary responses and heterogeneous treatment effects. The framework is based on three elements: (a) Bayesian multivariate logistic regression analysis with a Pólya-Gamma expansion; (b) a transformation procedure that maps the obtained regression coefficients onto a more intuitive multivariate probability scale (i.e., success probabilities and the differences between them); and (c) a compatible decision procedure for treatment comparison with prespecified decision error rates. Procedures for a priori sample size estimation under a non-informative prior distribution are included. A numerical evaluation demonstrated that decisions based on a priori sample size estimation resulted in the anticipated error rates in the trial population as well as in subpopulations. Further, average and conditional treatment effect parameters could be estimated without bias when the sample was large enough. An illustration with the International Stroke Trial dataset revealed a trend toward heterogeneous effects among stroke patients, something that would have remained undetected had the analyses been limited to average treatment effects.
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11548885/pdf/
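A hedged sketch of elements (b) and (c) of the framework described above, under heavy simplification: a single binary outcome replaces the multivariate response, and a plain random-walk Metropolis sampler stands in for the paper's Pólya-Gamma Gibbs sampler. The simulated data, the vague prior, and the 0.95 decision threshold are assumptions made purely for illustration.

```python
# Simplified stand-in for the paper's approach: Bayesian logistic regression for a
# two-arm trial, coefficients transformed to the probability scale, and a
# superiority decision based on a prespecified posterior threshold (assumed 0.95).
import numpy as np

rng = np.random.default_rng(1)
n = 400
treat = rng.integers(0, 2, size=n)                  # 0 = control, 1 = treatment
true_logit = -0.2 + 0.5 * treat
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))
X = np.column_stack([np.ones(n), treat])

def log_posterior(beta):
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    logprior = -0.5 * np.sum(beta**2) / 100.0       # vague normal prior
    return loglik + logprior

# Random-walk Metropolis (replaces the Polya-Gamma Gibbs sampler of the paper).
draws, beta = [], np.zeros(2)
current_lp = log_posterior(beta)
for _ in range(20000):
    proposal = beta + rng.normal(scale=0.1, size=2)
    lp = log_posterior(proposal)
    if np.log(rng.uniform()) < lp - current_lp:
        beta, current_lp = proposal, lp
    draws.append(beta.copy())
draws = np.array(draws[5000:])                      # discard burn-in

# (b) Transform coefficients to success probabilities and their difference.
p_control = 1 / (1 + np.exp(-draws[:, 0]))
p_treat = 1 / (1 + np.exp(-(draws[:, 0] + draws[:, 1])))
delta = p_treat - p_control

# (c) Superiority decision with a prespecified posterior probability threshold.
print("P(treatment superior) =", np.mean(delta > 0))
print("Declare superiority:", bool(np.mean(delta > 0) > 0.95))
```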
Pub Date: 2024-05-01 | Epub Date: 2024-02-19 | DOI: 10.1080/00273171.2024.2310426
Title: Information Matrix Test for Item Response Models Using Stochastic Approximation.
Authors: Youngjin Han, Yang Liu, Ji Seung Yang
Pub Date: 2024-05-01 | Epub Date: 2024-02-20 | DOI: 10.1080/00273171.2024.2307529
Title: Propensity Score Weighting with Missing Data on Covariates and Clustered Data Structure.
Authors: Xiao Liu
Propensity score (PS) analyses are increasingly popular in the behavioral sciences. Two issues often add complexity to PS analyses: missing data in observed covariates and a clustered data structure. Previous research has examined methods for conducting PS analyses that address either issue alone. In practice, however, the two issues often co-occur, and the performance of PS analysis methods in the presence of both has not been evaluated. In this study, we consider PS weighting analysis when data are clustered and observed covariates have missing values. A simulation study evaluates the performance of different missing data handling methods (complete-case analysis, single-level imputation, or multilevel imputation) combined with different multilevel PS weighting methods (fixed- or random-effects PS models, inverse propensity weighting or clustered weighting, and weighted single-level or multilevel outcome models). The results suggest that bias in average treatment effect estimation can be reduced by better accounting for clustering in both the missing data handling stage (such as with multilevel imputation) and the PS analysis stage (such as with a fixed-effects PS model, clustered weighting, and a weighted multilevel outcome model). A real-data example is provided for illustration.
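A minimal sketch of one pipeline among the combinations the study above evaluates: a fixed-effects propensity score model (cluster indicators as dummies) followed by inverse propensity weighting. The cluster-mean imputation used here is only a crude placeholder for the multilevel multiple imputation the study recommends, and every variable is simulated for illustration.

```python
# Fixed-effects propensity score model plus inverse-propensity weighting on
# simulated clustered data with covariate missingness.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_clusters, n_per = 30, 40
cluster = np.repeat(np.arange(n_clusters), n_per)
u = rng.normal(size=n_clusters)[cluster]            # cluster-level effect
x1 = rng.normal(size=cluster.size) + u
x2 = rng.normal(size=cluster.size)
treat = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x1 + u))))
y = 1.0 * treat + 0.8 * x1 + 0.5 * x2 + u + rng.normal(size=cluster.size)

df = pd.DataFrame({"cluster": cluster, "x1": x1, "x2": x2, "treat": treat, "y": y})
df.loc[rng.uniform(size=len(df)) < 0.2, "x1"] = np.nan   # covariate missingness

# Crude cluster-mean imputation (placeholder for multilevel multiple imputation).
df["x1"] = df.groupby("cluster")["x1"].transform(lambda s: s.fillna(s.mean()))

# Fixed-effects propensity score model: cluster indicators enter as dummies.
ps_model = smf.logit("treat ~ x1 + x2 + C(cluster)", data=df).fit(disp=0)
df["ps"] = ps_model.predict(df)

# Inverse-propensity weights and a weighted estimate of the average treatment effect.
w = df["treat"] / df["ps"] + (1 - df["treat"]) / (1 - df["ps"])
ate = (np.average(df["y"], weights=w * df["treat"])
       - np.average(df["y"], weights=w * (1 - df["treat"])))
print("IPW ATE estimate:", round(ate, 3))
```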
Pub Date: 2024-05-01 | Epub Date: 2024-02-13 | DOI: 10.1080/00273171.2023.2289058
Title: Subgrouping with Chain Graphical VAR Models.
Authors: Jonathan J Park, Sy-Miin Chow, Sacha Epskamp, Peter C M Molenaar
Recent years have seen the emergence of an "idio-thetic" class of methods to bridge the gap between nomothetic and idiographic inference. These methods describe nomothetic trends in idiographic processes by pooling intraindividual information across individuals to inform group-level inference, or vice versa. The current work introduces a novel "idio-thetic" model: the subgrouped chain graphical vector autoregression (scGVAR). The scGVAR is unique in its ability to identify subgroups of individuals who share common dynamic network structures in both lag(1) and contemporaneous effects. Results from Monte Carlo simulations indicate that the scGVAR shows promise over similar approaches when clusters of individuals differ in their contemporaneous dynamics, displaying increased sensitivity in detecting nuanced group differences while keeping Type I error rates low. In contrast, a competing approach, the Alternating Least Squares VAR (ALS VAR), performs well when groups are separated by larger distances. Further considerations are provided regarding applications of the ALS VAR and scGVAR to real data and the strengths and limitations of both methods.
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11187704/pdf/
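The scGVAR algorithm itself is not reproduced here. As a far simpler stand-in for its subgrouping idea, the sketch below estimates person-specific VAR(1) transition matrices by ordinary least squares and clusters them with k-means; it captures only lagged dynamics (not the contemporaneous network) and runs on simulated data with two known subgroups.

```python
# Person-specific VAR(1) estimation followed by k-means clustering of the
# coefficient vectors: a simplified analogue of subgrouping by dynamics.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_people, n_time, n_vars = 60, 100, 3

def simulate_person(A):
    """Simulate one person's multivariate time series from a VAR(1) process."""
    x = np.zeros((n_time, n_vars))
    for t in range(1, n_time):
        x[t] = A @ x[t - 1] + rng.normal(scale=0.5, size=n_vars)
    return x

# Two latent subgroups with different lag(1) dynamics.
A1 = np.array([[0.4, 0.2, 0.0], [0.0, 0.3, 0.2], [0.1, 0.0, 0.4]])
A2 = np.array([[0.3, -0.2, 0.0], [0.2, 0.3, -0.1], [0.0, 0.2, 0.3]])
data = [simulate_person(A1 if i < 30 else A2) for i in range(n_people)]

def var1_coefficients(x):
    """OLS estimate of the VAR(1) transition matrix, returned as a flat vector."""
    Y, X = x[1:], x[:-1]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)        # Y = X @ B, with B = A.T
    return B.T.ravel()

features = np.array([var1_coefficients(x) for x in data])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("Recovered subgroup sizes:", np.bincount(labels))
```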
Pub Date: 2024-05-01 | Epub Date: 2024-02-23 | DOI: 10.1080/00273171.2024.2310418
Title: Correcting Regression Coefficients for Collider Bias in Psychological Research.
Authors: Sophia J Lamp, David P MacKinnon
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11187666/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-02-13 | DOI: 10.1080/00273171.2023.2288577
Title: Understanding Ability and Reliability Differences Measured with Count Items: The Distributional Regression Test Model and the Count Latent Regression Model.
Authors: Marie Beisemann, Boris Forthmann, Philipp Doebler
In psychology and education, tests (e.g., reading tests) and self-reports (e.g., clinical questionnaires) generate counts, but the corresponding Item Response Theory (IRT) methods are underdeveloped compared to those for binary data. Recent advances include the Two-Parameter Conway-Maxwell-Poisson model (2PCMPM), which generalizes Rasch's Poisson Counts Model with item-specific difficulty, discrimination, and dispersion parameters. Explaining differences in model parameters informs item construction and selection but has received little attention. We introduce two 2PCMPM-based explanatory count IRT models: the Distributional Regression Test Model for item covariates and the Count Latent Regression Model for (categorical) person covariates. Estimation methods are provided, and satisfactory statistical properties are observed in simulations. Two examples illustrate how the models help to understand tests and the underlying constructs.
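The sketch below fits only the Rasch Poisson Counts Model that the 2PCMPM generalizes, using joint maximum likelihood with scipy; the Conway-Maxwell-Poisson dispersion and discrimination parameters, as well as the explanatory extensions introduced in the paper, are omitted. All data and parameter values are simulated for illustration.

```python
# Rasch Poisson Counts Model: log(mu_pi) = theta_p + delta_i, fitted by joint ML.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(5)
n_persons, n_items = 100, 5
theta_true = rng.normal(size=n_persons)             # person abilities
delta_true = rng.normal(scale=0.5, size=n_items)    # item easiness
counts = rng.poisson(np.exp(theta_true[:, None] + delta_true[None, :]))

def neg_loglik(params):
    theta, delta = params[:n_persons], params[n_persons:]
    delta = delta - delta.mean()                     # identification constraint
    log_mu = theta[:, None] + delta[None, :]
    return -np.sum(counts * log_mu - np.exp(log_mu) - gammaln(counts + 1))

fit = minimize(neg_loglik, np.zeros(n_persons + n_items), method="L-BFGS-B")
delta_hat = fit.x[n_persons:] - fit.x[n_persons:].mean()
print("Estimated item easiness:", np.round(delta_hat, 2))
print("True item easiness:     ", np.round(delta_true - delta_true.mean(), 2))
```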
Pub Date: 2024-05-01 | Epub Date: 2024-02-14 | DOI: 10.1080/00273171.2023.2288575
Title: Investigating Moderation Effects at the Within-Person Level Using Intensive Longitudinal Data: A Two-Level Dynamic Structural Equation Modelling Approach in Mplus.
Authors: Lydia Gabriela Speyer, Aja Louise Murray, Rogier Kievit
Recent technological advances have provided new opportunities for the collection of intensive longitudinal data. Analyzed with methods such as dynamic structural equation modeling, these data can provide new insights into the moment-to-moment dynamics of psychological and behavioral processes. In intensive longitudinal data (t > 20), researchers often have theories implying that factors that change from moment to moment within individuals act as moderators. For instance, a person's level of sleep deprivation may affect how much an external stressor affects mood. Here, we describe how researchers can implement, test, and interpret dynamically changing within-person moderation effects using two-level dynamic structural equation modeling as implemented in the structural equation modeling software Mplus. We illustrate the analysis of within-person moderation effects with an empirical example investigating whether changes in time spent on social media affect the moment-to-moment effect of loneliness on depressive symptoms, and we highlight avenues for future methodological development. We provide annotated Mplus code, enabling researchers to better isolate, estimate, and interpret the complexities of within-person interaction effects.
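The paper works with two-level dynamic structural equation modeling in Mplus; the sketch below is only a simplified frequentist analogue of the within-person moderation idea, using person-mean-centered predictors and a random-intercept model in statsmodels. The variable names follow the empirical example (loneliness, social media use, depressive symptoms), but the data and effect sizes are invented.

```python
# Within-person moderation on simulated intensive longitudinal data: the momentary
# effect of loneliness on depressive symptoms depends on momentary social media use.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_persons, n_obs = 80, 40
pid = np.repeat(np.arange(n_persons), n_obs)
loneliness = rng.normal(size=pid.size)
social_media = rng.normal(size=pid.size)
person_intercept = rng.normal(scale=0.8, size=n_persons)[pid]
depression = (person_intercept
              + (0.3 + 0.2 * social_media) * loneliness
              + rng.normal(scale=0.5, size=pid.size))

df = pd.DataFrame({"pid": pid, "loneliness": loneliness,
                   "social_media": social_media, "depression": depression})

# Person-mean centering isolates within-person fluctuations.
for var in ["loneliness", "social_media"]:
    df[var + "_w"] = df[var] - df.groupby("pid")[var].transform("mean")

# Random-intercept model with a within-person interaction (moderation) term.
model = smf.mixedlm("depression ~ loneliness_w * social_media_w",
                    data=df, groups=df["pid"]).fit()
print(model.summary())
```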
Pub Date: 2024-05-01 | Epub Date: 2024-02-13 | DOI: 10.1080/00273171.2023.2288589
Title: Approaches to Item-Level Data with Cross-Classified Structure: An Illustration with Student Evaluation of Teaching.
Authors: Sijia Huang
Student evaluation of teaching (SET) questionnaires are ubiquitously applied in North American higher education institutions for both formative and summative purposes. Data collected from SET questionnaires are usually item-level data characterized by multivariate categorical outcomes (i.e., multiple Likert-type items) and a cross-classified structure (i.e., non-nested students and instructors). Recently, a new approach, the cross-classified IRT model, was proposed for appropriately handling SET data. To inform researchers in higher education, this article reviews the cross-classified IRT model alongside three existing approaches applied in SET studies: the cross-classified random effects model (CCREM), the multilevel item response theory (MLIRT) model, and a two-step integrated strategy. The strengths and weaknesses of each of the four approaches are discussed. Additionally, the new and existing approaches are compared through an empirical data analysis and a preliminary simulation study. The article concludes by providing general suggestions to researchers for analyzing SET data and discussing limitations and future research directions.
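As a rough illustration of the cross-classified (non-nested) structure of SET data, the sketch below simulates students who each rate several instructors and fits a simple CCREM to an overall rating using variance components in statsmodels. This is a frequentist model for a single continuous rating, not the item-level cross-classified IRT model the article reviews, and the single-group, variance-components specification for crossed random effects is an assumed workaround rather than anything prescribed by the article.

```python
# Cross-classified SET data (students x instructors) and a basic CCREM fit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2024)
n_students, n_instructors = 60, 10
student_eff = rng.normal(scale=0.6, size=n_students)
instructor_eff = rng.normal(scale=0.8, size=n_instructors)

# Each student evaluates several instructors: students and instructors are
# crossed, not nested.
rows = []
for s in range(n_students):
    for i in rng.choice(n_instructors, size=4, replace=False):
        rating = 3.5 + student_eff[s] + instructor_eff[i] + rng.normal(scale=0.5)
        rows.append({"student": s, "instructor": i, "rating": rating})
df = pd.DataFrame(rows)

# Crossed random effects via variance components: all observations form one group,
# with separate variance components for students and for instructors.
df["one"] = 1
ccrem = smf.mixedlm("rating ~ 1", data=df, groups="one", re_formula="0",
                    vc_formula={"student": "0 + C(student)",
                                "instructor": "0 + C(instructor)"}).fit()
print(ccrem.summary())
```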
Pub Date: 2024-05-01 | Epub Date: 2024-02-15 | DOI: 10.1080/00273171.2024.2310429
Title: The Forgotten Trade-off between Internal Consistency and Validity.
Authors: Kayla M Garner