Consider a two-way ANOVA design. Generally, interactions are characterized by the difference between two measures of effect size. Typically the measure of effect size is based on the difference between measures of location, with the difference between means being the most common choice. This paper deals with extending extant results to two robust, heteroscedastic measures of effect size. The first is a robust, heteroscedastic analogue of Cohen's d. The second characterizes effect size in terms of the quantiles of the null distribution. Simulation results indicate that a percentile bootstrap method yields reasonably accurate confidence intervals. Data from an actual study are used to illustrate how these measures of effect size can add perspective when comparing groups.
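The percentile bootstrap method mentioned above can be sketched in a few lines. This is a generic illustration, not Wilcox's exact procedure: it forms a confidence interval for the difference in a robust location measure between two groups, here a 20% trimmed mean (an assumed, illustrative choice of robust statistic).

```python
import numpy as np

def trim_mean20(v):
    """20% trimmed mean, a common robust measure of location."""
    v = np.sort(np.asarray(v, dtype=float))
    g = int(0.2 * len(v))
    return v[g:len(v) - g].mean()

def percentile_boot_ci(x, y, stat=trim_mean20, n_boot=2000, alpha=0.05, seed=None):
    """Percentile bootstrap CI for stat(x) - stat(y)."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        diffs[b] = stat(xb) - stat(yb)
    # interval end-points are the alpha/2 and 1 - alpha/2 bootstrap quantiles
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```

For an interaction in a two-by-two design, the same idea applies with the statistic replaced by a difference of effect sizes across the levels of the second factor.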
{"title":"Two-way ANOVA: Inferences about interactions based on robust measures of effect size","authors":"Rand R. Wilcox","doi":"10.1111/bmsp.12244","DOIUrl":"10.1111/bmsp.12244","url":null,"abstract":"<p>Consider a two-way ANOVA design. Generally, interactions are characterized by the difference between two measures of effect size. Typically the measure of effect size is based on the difference between measures of location, with the difference between means being the most common choice. This paper deals with extending extant results to two robust, heteroscedastic measures of effect size. The first is a robust, heteroscedastic analogue of Cohen's <i>d</i>. The second characterizes effect size in terms of the quantiles of the null distribution. Simulation results indicate that a percentile bootstrap method yields reasonably accurate confidence intervals. Data from an actual study are used to illustrate how these measures of effect size can add perspective when comparing groups.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1111/bmsp.12244","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38871399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meta-analyses of correlation coefficients are an important technique to integrate results from many cross-sectional and longitudinal research designs. Uncertainty in pooled estimates is typically assessed with the help of confidence intervals, which can double as hypothesis tests for two-sided hypotheses about the underlying correlation. A standard approach to construct confidence intervals for the main effect is the Hedges-Olkin-Vevea Fisher-z (HOVz) approach, which is based on the Fisher-z transformation. Results from previous studies (Field, 2005, Psychol. Meth., 10, 444; Hafdahl and Williams, 2009, Psychol. Meth., 14, 24), however, indicate that in random-effects models the performance of the HOVz confidence interval can be unsatisfactory. To this end, we propose improvements of the HOVz approach, which are based on enhanced variance estimators for the main effect estimate. In order to study the coverage of the new confidence intervals in both fixed- and random-effects meta-analysis models, we perform an extensive simulation study, comparing them to established approaches. Data were generated via a truncated normal and beta distribution model. The results show that our newly proposed confidence intervals based on a Knapp-Hartung-type variance estimator or robust heteroscedasticity consistent sandwich estimators in combination with the integral z-to-r transformation (Hafdahl, 2009, Br. J. Math. Stat. Psychol., 62, 233) provide more accurate coverage than existing approaches in most scenarios, especially in the more appropriate beta distribution simulation model.
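For context, the basic HOVz construction works as follows: each correlation is mapped to the z scale with Fisher's transformation, pooled with inverse-variance weights (the z-scale variance is 1/(n_i − 3) under the fixed-effect model), and the interval end-points are mapped back with tanh. A minimal fixed-effect sketch; the paper's improvements concern the random-effects variance estimator, which is not shown here:

```python
import numpy as np
from statistics import NormalDist

def hovz_ci(rs, ns, alpha=0.05):
    """Fixed-effect HOVz confidence interval for a pooled correlation.

    rs: observed correlations; ns: per-study sample sizes.
    """
    zs = np.arctanh(np.asarray(rs, dtype=float))   # Fisher z transformation
    w = np.asarray(ns, dtype=float) - 3.0          # inverse of var(z_i) = 1/(n_i - 3)
    z_bar = np.sum(w * zs) / np.sum(w)             # inverse-variance pooled estimate
    se = np.sqrt(1.0 / np.sum(w))
    crit = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    # back-transform the z-scale interval to the correlation scale
    return float(np.tanh(z_bar - crit * se)), float(np.tanh(z_bar + crit * se))
```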
{"title":"Fisher transformation based confidence intervals of correlations in fixed- and random-effects meta-analysis","authors":"Thilo Welz, Philipp Doebler, Markus Pauly","doi":"10.1111/bmsp.12242","DOIUrl":"10.1111/bmsp.12242","url":null,"abstract":"<p>Meta-analyses of correlation coefficients are an important technique to integrate results from many cross-sectional and longitudinal research designs. Uncertainty in pooled estimates is typically assessed with the help of confidence intervals, which can double as hypothesis tests for two-sided hypotheses about the underlying correlation. A standard approach to construct confidence intervals for the main effect is the Hedges-Olkin-Vevea Fisher-z (HOVz) approach, which is based on the Fisher-z transformation. Results from previous studies (Field, 2005, <i>Psychol. Meth</i>., 10, 444; Hafdahl and Williams, 2009, <i>Psychol. Meth</i>., 14, 24), however, indicate that in random-effects models the performance of the HOVz confidence interval can be unsatisfactory. To this end, we propose improvements of the HOVz approach, which are based on enhanced variance estimators for the main effect estimate. In order to study the coverage of the new confidence intervals in both fixed- and random-effects meta-analysis models, we perform an extensive simulation study, comparing them to established approaches. Data were generated via a truncated normal and beta distribution model. The results show that our newly proposed confidence intervals based on a Knapp-Hartung-type variance estimator or robust heteroscedasticity consistent sandwich estimators in combination with the integral z-to-r transformation (Hafdahl, 2009, <i>Br. J. Math. Stat. Psychol</i>., 62, 233) provide more accurate coverage than existing approaches in most scenarios, especially in the more appropriate beta distribution simulation model.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1111/bmsp.12242","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38938463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Methods for the treatment of item non-response in attitudinal scales and in large-scale assessments under the pairwise likelihood (PL) estimation framework and under a missing at random (MAR) mechanism are proposed. Under a full information likelihood estimation framework and MAR, ignorability of the missing data mechanism does not lead to biased estimates. However, this is not the case for pseudo-likelihood approaches such as the PL. We develop and study the performance of three strategies for incorporating missing values into confirmatory factor analysis under the PL framework, the complete-pairs (CP), the available-cases (AC) and the doubly robust (DR) approaches. The CP and AC require only a model for the observed data and standard errors are easy to compute. Doubly-robust versions of the PL estimation require a predictive model for the missing responses given the observed ones and are computationally more demanding than the AC and CP. A simulation study is used to compare the proposed methods. The proposed methods are employed to analyze the UK data on numeracy and literacy collected as part of the OECD Survey of Adult Skills.
{"title":"Pairwise likelihood estimation for confirmatory factor analysis models with categorical variables and data that are missing at random","authors":"Myrsini Katsikatsou, Irini Moustaki, Haziq Jamil","doi":"10.1111/bmsp.12243","DOIUrl":"10.1111/bmsp.12243","url":null,"abstract":"<p>Methods for the treatment of item non-response in attitudinal scales and in large-scale assessments under the pairwise likelihood (PL) estimation framework and under a missing at random (MAR) mechanism are proposed. Under a full information likelihood estimation framework and MAR, ignorability of the missing data mechanism does not lead to biased estimates. However, this is not the case for pseudo-likelihood approaches such as the PL. We develop and study the performance of three strategies for incorporating missing values into confirmatory factor analysis under the PL framework, the complete-pairs (CP), the available-cases (AC) and the doubly robust (DR) approaches. The CP and AC require only a model for the observed data and standard errors are easy to compute. Doubly-robust versions of the PL estimation require a predictive model for the missing responses given the observed ones and are computationally more demanding than the AC and CP. A simulation study is used to compare the proposed methods. The proposed methods are employed to analyze the UK data on numeracy and literacy collected as part of the OECD Survey of Adult Skills.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1111/bmsp.12243","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38876007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The non-response model in Knott et al. (1991, Statistician, 40, 217) can be represented as a tree model with one branch for response/non-response and another branch for correct/incorrect response, and each branch probability is characterized by an item response theory model. In the model, it is assumed that there is only one source of non-responses. However, in questionnaires or educational tests, non-responses might come from different sources, such as test speededness, inability to answer, lack of motivation, and sensitive questions. To better accommodate such more realistic underlying mechanisms, we propose a tree model with four end nodes, not all distinct, for non-response modelling. The Laplace-approximated maximum likelihood estimation for the proposed model is suggested. The validation of the proposed estimation procedure and the advantage of the proposed model over traditional methods are demonstrated in simulations. For illustration, the methodologies are applied to data from the 2012 Programme for International Student Assessment (PISA). The analysis shows that the proposed tree model has a better fit to PISA data than other existing models, providing a useful tool to distinguish the sources of non-responses.
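The tree structure of the Knott et al. model can be made concrete with a small sketch. Assuming Rasch-type branch probabilities (a simplification for illustration), a two-branch tree assigns each item outcome the product of the probabilities along its path:

```python
import math

def rasch(theta, b):
    """Rasch-type probability of taking the 'upper' branch."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def tree_outcome_probs(theta, b_respond, b_correct):
    """Outcome probabilities for a two-branch response tree:
    branch 1 decides respond vs. non-response; branch 2, reached only
    after a response, decides correct vs. incorrect."""
    p_r = rasch(theta, b_respond)   # probability of responding at all
    p_c = rasch(theta, b_correct)   # probability of answering correctly, given a response
    return {
        "non-response": 1.0 - p_r,
        "incorrect": p_r * (1.0 - p_c),
        "correct": p_r * p_c,
    }
```

The extension proposed in the paper adds further branches so that the tree has four end nodes, which need not all be distinct, allowing several sources of non-response to share an end node.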
{"title":"An item response tree model with not-all-distinct end nodes for non-response modelling","authors":"Yu-Wei Chang, Nan-Jung Hsu, Rung-Ching Tsai","doi":"10.1111/bmsp.12236","DOIUrl":"10.1111/bmsp.12236","url":null,"abstract":"<p>The non-response model in Knott <i>et al</i>. (1991, <i>Statistician</i>, <i>40</i>, 217) can be represented as a tree model with one branch for response/non-response and another branch for correct/incorrect response, and each branch probability is characterized by an item response theory model. In the model, it is assumed that there is only one source of non-responses. However, in questionnaires or educational tests, non-responses might come from different sources, such as test speededness, inability to answer, lack of motivation, and sensitive questions. To better accommodate such more realistic underlying mechanisms, we propose a tree model with four end nodes, not all distinct, for non-response modelling. The Laplace-approximated maximum likelihood estimation for the proposed model is suggested. The validation of the proposed estimation procedure and the advantage of the proposed model over traditional methods are demonstrated in simulations. For illustration, the methodologies are applied to data from the 2012 Programme for International Student Assessment (PISA). The analysis shows that the proposed tree model has a better fit to PISA data than other existing models, providing a useful tool to distinguish the sources of non-responses.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1111/bmsp.12236","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25537318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extended redundancy analysis (ERA) is used to reduce multiple sets of predictors to a smaller number of components and examine the effects of these components on a response variable. In various social and behavioural studies, auxiliary covariates (e.g., gender, ethnicity) can often lead to heterogeneous subgroups of observations, each of which involves distinctive relationships between predictor and response variables. ERA is currently unable to consider such covariate-dependent heterogeneity to examine whether the model parameters vary across subgroups differentiated by covariates. To address this issue, we combine ERA with model-based recursive partitioning in a single framework. This combined method, MOB-ERA, aims to partition observations into heterogeneous subgroups recursively based on a set of covariates while fitting a specified ERA model to data. Upon the completion of the partitioning procedure, one can easily examine the difference in the estimated ERA parameters across covariate-dependent subgroups. Moreover, it produces a tree diagram that aids in visualizing a hierarchy of partitioning covariates, as well as interpreting their interactions. In the analysis of public data concerning nicotine dependence among US adults, the method uncovered heterogeneous subgroups characterized by several sociodemographic covariates, each of which yielded different directional relationships between three predictor sets and nicotine dependence.
{"title":"Model-based recursive partitioning of extended redundancy analysis with an application to nicotine dependence among US adults","authors":"Sunmee Kim, Heungsun Hwang","doi":"10.1111/bmsp.12240","DOIUrl":"10.1111/bmsp.12240","url":null,"abstract":"<p>Extended redundancy analysis (ERA) is used to reduce multiple sets of predictors to a smaller number of components and examine the effects of these components on a response variable. In various social and behavioural studies, auxiliary covariates (e.g., gender, ethnicity) can often lead to heterogeneous subgroups of observations, each of which involves distinctive relationships between predictor and response variables. ERA is currently unable to consider such covariate-dependent heterogeneity to examine whether the model parameters vary across subgroups differentiated by covariates. To address this issue, we combine ERA with model-based recursive partitioning in a single framework. This combined method, MOB-ERA, aims to partition observations into heterogeneous subgroups recursively based on a set of covariates while fitting a specified ERA model to data. Upon the completion of the partitioning procedure, one can easily examine the difference in the estimated ERA parameters across covariate-dependent subgroups. Moreover, it produces a tree diagram that aids in visualizing a hierarchy of partitioning covariates, as well as interpreting their interactions. In the analysis of public data concerning nicotine dependence among US adults, the method uncovered heterogeneous subgroups characterized by several sociodemographic covariates, each of which yielded different directional relationships between three predictor sets and nicotine dependence.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1111/bmsp.12240","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25529965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years a number of articles have focused on the identifiability of the basic local independence model. The identifiability issue usually concerns two model parameter sets predicting an identical probability distribution on the response patterns. Both parameter sets are applied to the same knowledge structure. However, nothing is known about cases where different knowledge structures predict the same probability distribution. This situation is referred to as 'empirical indistinguishability' between two structures and is the main subject of the present paper. Empirical indistinguishability is a stronger form of unidentifiability, which involves not only the parameters, but also the structural and combinatorial properties of the model. In particular, as far as knowledge structures are concerned, a consequence of empirical indistinguishability is that the existence of certain knowledge states cannot be empirically established. Most importantly, it is shown that model identifiability cannot guarantee that a certain knowledge structure is empirically distinguishable from others. The theoretical findings are exemplified in a number of different empirical scenarios.
{"title":"On the empirical indistinguishability of knowledge structures","authors":"Luca Stefanutti, Andrea Spoto","doi":"10.1111/bmsp.12235","DOIUrl":"10.1111/bmsp.12235","url":null,"abstract":"<p>In recent years a number of articles have focused on the identifiability of the basic local independence model. The identifiability issue usually concerns two model parameter sets predicting an identical probability distribution on the response patterns. Both parameter sets are applied to the same knowledge structure. However, nothing is known about cases where different knowledge structures predict the same probability distribution. This situation is referred to as 'empirical indistinguishability' between two structures and is the main subject of the present paper. Empirical indistinguishability is a stronger form of unidentifiability, which involves not only the parameters, but also the structural and combinatorial properties of the model. In particular, as far as knowledge structures are concerned, a consequence of empirical indistinguishability is that the existence of certain knowledge states cannot be empirically established. Most importantly, it is shown that model identifiability cannot guarantee that a certain knowledge structure is empirically distinguishable from others. The theoretical findings are exemplified in a number of different empirical scenarios.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1111/bmsp.12235","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25530563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The three-parameter logistic model is widely used to model the responses to a proficiency test when the examinees can guess the correct response, as is the case for multiple-choice items. However, the weak identifiability of the parameters of the model results in large variability of the estimates and in convergence difficulties in the numerical maximization of the likelihood function. To overcome these issues, in this paper we explore various shrinkage estimation methods, following two main approaches. First, a ridge-type penalty on the guessing parameters is introduced in the likelihood function. The tuning parameter is then selected through various approaches: cross-validation, information criteria or using an empirical Bayes method. The second approach explored is based on the methodology developed to reduce the bias of the maximum likelihood estimator through an adjusted score equation. The performance of the methods is investigated through simulation studies and a real data example.
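To make the first approach concrete, here is a minimal sketch of a ridge-type penalty on the guessing parameter of a single 3PL item. The penalty target and exact form are illustrative assumptions, and the tuning parameter `lam` would be chosen by cross-validation, an information criterion, or empirical Bayes as described above:

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Three-parameter logistic model: guessing floor c plus a 2PL curve above it."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def penalized_nll(params, theta, y, lam, c_target=0.2):
    """Negative log-likelihood for one item plus a ridge-type penalty
    shrinking the guessing parameter c toward c_target (illustrative choice)."""
    a, b, c = params
    p = p_3pl(theta, a, b, c)
    loglik = np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    return -loglik + lam * (c - c_target) ** 2
```

A generic numerical optimizer (e.g. `scipy.optimize.minimize` with bounds keeping c in (0, 1)) could then minimize `penalized_nll`; the paper's second, bias-reduction approach modifies the score equation instead and is not sketched here.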
{"title":"Shrinkage estimation of the three-parameter logistic model","authors":"Michela Battauz, Ruggero Bellio","doi":"10.1111/bmsp.12241","DOIUrl":"10.1111/bmsp.12241","url":null,"abstract":"<p>The three-parameter logistic model is widely used to model the responses to a proficiency test when the examinees can guess the correct response, as is the case for multiple-choice items. However, the weak identifiability of the parameters of the model results in large variability of the estimates and in convergence difficulties in the numerical maximization of the likelihood function. To overcome these issues, in this paper we explore various shrinkage estimation methods, following two main approaches. First, a ridge-type penalty on the guessing parameters is introduced in the likelihood function. The tuning parameter is then selected through various approaches: cross-validation, information criteria or using an empirical Bayes method. The second approach explored is based on the methodology developed to reduce the bias of the maximum likelihood estimator through an adjusted score equation. The performance of the methods is investigated through simulation studies and a real data example.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1111/bmsp.12241","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25490957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We develop factor copula models to analyse the dependence among mixed continuous and discrete responses. Factor copula models are canonical vine copulas that involve both observed and latent variables, hence they allow tail, asymmetric and nonlinear dependence. They can be explained as conditional independence models with latent variables that do not necessarily have an additive latent structure. We focus on important issues of interest to the social data analyst, such as model selection and goodness of fit. Our general methodology is demonstrated with an extensive simulation study and illustrated by reanalysing three mixed response data sets. Our studies suggest that there can be a substantial improvement over the standard factor model for mixed data and make the argument for moving to factor copula models.
{"title":"Factor copula models for mixed data","authors":"Sayed H. Kadhem, Aristidis K. Nikoloulopoulos","doi":"10.1111/bmsp.12231","DOIUrl":"10.1111/bmsp.12231","url":null,"abstract":"<p>We develop factor copula models to analyse the dependence among mixed continuous and discrete responses. Factor copula models are canonical vine copulas that involve both observed and latent variables, hence they allow tail, asymmetric and nonlinear dependence. They can be explained as conditional independence models with latent variables that do not necessarily have an additive latent structure. We focus on important issues of interest to the social data analyst, such as model selection and goodness of fit. Our general methodology is demonstrated with an extensive simulation study and illustrated by reanalysing three mixed response data sets. Our studies suggest that there can be a substantial improvement over the standard factor model for mixed data and make the argument for moving to factor copula models.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1111/bmsp.12231","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39499222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Principal covariate regression (PCOVR) is a method for regressing a set of criterion variables with respect to a set of predictor variables when the latter are many in number and/or collinear. This is done by extracting a limited number of components that simultaneously synthesize the predictor variables and predict the criterion ones. So far, no procedure has been offered for estimating statistical uncertainties of the obtained PCOVR parameter estimates. The present paper shows how this goal can be achieved, conditionally on the model specification, by means of the bootstrap approach. Four strategies for estimating bootstrap confidence intervals are derived and their statistical behaviour in terms of coverage is assessed by means of a simulation experiment. Such strategies are distinguished by the use of the varimax and quartimin procedures and by the use of Procrustes rotations of bootstrap solutions towards the sample solution. In general, the four strategies showed appropriate statistical behaviour, with coverage tending to the desired level for increasing sample sizes. The main exception involved strategies based on the quartimin procedure in cases characterized by complex underlying structures of the components. The appropriateness of the statistical behaviour was higher when the proper number of components were extracted.
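The Procrustes step mentioned above aligns each bootstrap solution with the sample solution before percentiles are taken, removing arbitrary rotational differences between replicates. A minimal orthogonal-Procrustes sketch; the varimax and quartimin rotations the paper also studies are not shown:

```python
import numpy as np

def procrustes_align(B_boot, B_sample):
    """Rotate a bootstrap loading matrix B_boot toward the sample
    solution B_sample using the orthogonal Procrustes (SVD) solution,
    i.e. the orthogonal R minimizing ||B_boot @ R - B_sample||_F."""
    U, _, Vt = np.linalg.svd(B_boot.T @ B_sample)
    return B_boot @ (U @ Vt)
```

In a bootstrap loop one would collect `procrustes_align(B_b, B_hat)` over replicates and take per-entry percentiles of the aligned loadings to form the confidence intervals.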
{"title":"Bootstrap confidence intervals for principal covariates regression","authors":"Paolo Giordani, Henk A. L. Kiers","doi":"10.1111/bmsp.12238","DOIUrl":"10.1111/bmsp.12238","url":null,"abstract":"<p>Principal covariate regression (PCOVR) is a method for regressing a set of criterion variables with respect to a set of predictor variables when the latter are many in number and/or collinear. This is done by extracting a limited number of components that simultaneously synthesize the predictor variables and predict the criterion ones. So far, no procedure has been offered for estimating statistical uncertainties of the obtained PCOVR parameter estimates. The present paper shows how this goal can be achieved, conditionally on the model specification, by means of the bootstrap approach. Four strategies for estimating bootstrap confidence intervals are derived and their statistical behaviour in terms of coverage is assessed by means of a simulation experiment. Such strategies are distinguished by the use of the varimax and quartimin procedures and by the use of Procrustes rotations of bootstrap solutions towards the sample solution. In general, the four strategies showed appropriate statistical behaviour, with coverage tending to the desired level for increasing sample sizes. The main exception involved strategies based on the quartimin procedure in cases characterized by complex underlying structures of the components. The appropriateness of the statistical behaviour was higher when the proper number of components were extracted.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1111/bmsp.12238","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25409884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}