
Latest Articles in Educational and Psychological Measurement

Assessing Dimensionality of IRT Models Using Traditional and Revised Parallel Analyses.
IF 2.7 | CAS Tier 3 (Psychology) | JCR Q2 (Mathematics, Interdisciplinary Applications) | Pub Date: 2023-06-01 | Epub Date: 2022-07-21 | DOI: 10.1177/00131644221111838
Wenjing Guo, Youn-Jeng Choi

Determining the number of dimensions is extremely important in applying item response theory (IRT) models to data. Traditional and revised parallel analyses have been proposed within the factor analysis framework, and both have shown some promise in assessing dimensionality. However, their performance in the IRT framework has not been systematically investigated. Therefore, we evaluated the accuracy of traditional and revised parallel analyses for determining the number of underlying dimensions in the IRT framework by conducting simulation studies. Six data generation factors were manipulated: number of observations, test length, type of generation models, number of dimensions, correlations between dimensions, and item discrimination. Results indicated that (a) when the generated IRT model is unidimensional, across all simulation conditions, traditional parallel analysis using principal component analysis and tetrachoric correlation performs best; (b) when the generated IRT model is multidimensional, traditional parallel analysis using principal component analysis and tetrachoric correlation yields the highest proportion of accurately identified underlying dimensions across all factors, except when the correlation between dimensions is 0.8 or the item discrimination is low; and (c) under a few combinations of simulated factors, none of the eight methods performed well (e.g., when the generation model is three-dimensional 3PL, the item discrimination is low, and the correlation between dimensions is 0.8).
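The retention logic of traditional parallel analysis can be sketched in a few lines: keep a dimension only if its observed eigenvalue exceeds what random data of the same size would produce. The sketch below is a simplified illustration, not the study's procedure — it uses Pearson correlations on simulated binary data (the article evaluates tetrachoric correlations) and the mean-eigenvalue criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

def eigenvalues_of_corr(data):
    """Eigenvalues of the Pearson correlation matrix (PCA on standardized data)."""
    corr = np.corrcoef(data, rowvar=False)
    return np.sort(np.linalg.eigvalsh(corr))[::-1]

def parallel_analysis(data, n_sims=50):
    """Retain components whose observed eigenvalue exceeds the mean
    eigenvalue obtained from random binary data of the same shape."""
    n, p = data.shape
    observed = eigenvalues_of_corr(data)
    random_eigs = np.array([
        eigenvalues_of_corr(rng.integers(0, 2, size=(n, p)).astype(float))
        for _ in range(n_sims)
    ])
    threshold = random_eigs.mean(axis=0)
    return int(np.sum(observed > threshold)), observed, threshold

# Toy unidimensional data: one latent trait drives 10 dichotomous items (Rasch-like).
theta = rng.normal(size=(500, 1))
probs = 1 / (1 + np.exp(-(theta - rng.normal(size=10))))
responses = (rng.random((500, 10)) < probs).astype(float)

n_dims, obs, thr = parallel_analysis(responses)
```

With a single generating trait, only the first observed eigenvalue should clear the random-data threshold.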

Educational and Psychological Measurement, 83(3), 609-629. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10177320/pdf/
Citations: 0
Changes in the Speed-Ability Relation Through Different Treatments of Rapid Guessing.
IF 2.1 | CAS Tier 3 (Psychology) | JCR Q2 (Mathematics, Interdisciplinary Applications) | Pub Date: 2023-06-01 | Epub Date: 2022-07-11 | DOI: 10.1177/00131644221109490
Tobias Deribo, Frank Goldhammer, Ulf Kroehne

As researchers in the social sciences, we are often interested in studying constructs that are not directly observable through assessments and questionnaires. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is skimmed briefly rather than read and engaged with in depth. Hence, responses given under rapid-guessing behavior bias the constructs and relations of interest. Such bias is also plausible for latent speed estimates obtained under rapid-guessing behavior, as well as for the identified relation between speed and ability. This bias seems especially problematic considering that the relation between speed and ability has been shown to improve precision in ability estimation. For this reason, we investigate if and how responses and response times obtained under rapid-guessing behavior affect the identified speed-ability relation and the precision of ability estimates in a joint model of speed and ability. The study therefore presents an empirical application that highlights a specific methodological problem resulting from rapid-guessing behavior. We show that different (non-)treatments of rapid guessing can lead to different conclusions about the underlying speed-ability relation. Furthermore, different rapid-guessing treatments led to wildly different conclusions about gains in precision through joint modeling. The results show the importance of taking rapid guessing into account when the psychometric use of response times is of interest.
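The "(non-)treatments" contrasted in the abstract can be illustrated with a minimal sketch. The fixed time threshold and the recode-as-missing rule below are illustrative assumptions, not the authors' actual flagging procedure:

```python
# Hypothetical scored responses and response times; flag rapid guesses with a
# fixed time threshold, then apply two common (non-)treatments.

THRESHOLD = 3.0  # seconds; a simple fixed threshold (an assumption for illustration)

responses = [1, 0, 1, 1, 0, 1]                 # scored 0/1
rts       = [12.4, 1.1, 8.9, 2.0, 15.3, 0.8]  # response times in seconds

flags = [rt < THRESHOLD for rt in rts]

# Treatment A (non-treatment): keep all responses as-is
kept = responses

# Treatment B: recode rapid guesses as missing (None), removing them from
# both the ability side and the speed side of a joint model
recoded = [None if f else r for r, f in zip(responses, flags)]

n_flagged = sum(flags)
```

Which treatment is applied changes the data entering the joint model, and hence the estimated speed-ability relation.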

Educational and Psychological Measurement, 83(3), 473-494. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10177319/pdf/
Citations: 0
The Impact of Sample Size and Various Other Factors on Estimation of Dichotomous Mixture IRT Models.
IF 2.7 | CAS Tier 3 (Psychology) | JCR Q2 (Mathematics, Interdisciplinary Applications) | Pub Date: 2023-06-01 | Epub Date: 2022-05-19 | DOI: 10.1177/00131644221094325
Sedat Sen, Allan S Cohen

The purpose of this study was to examine the effects of different data conditions on item parameter recovery and classification accuracy of three dichotomous mixture item response theory (IRT) models: the Mix1PL, Mix2PL, and Mix3PL. Manipulated factors in the simulation included the sample size (11 different sample sizes from 100 to 5,000), test length (10, 30, and 50), number of classes (2 and 3), the degree of latent class separation (normal/no separation, small, medium, and large), and class sizes (equal vs. nonequal). Effects were assessed using the root mean square error (RMSE) and the classification accuracy percentage computed between true and estimated parameters. The results of this simulation study showed that more precise estimates of item parameters were obtained with larger sample sizes and longer test lengths. Recovery of item parameters worsened as the number of classes increased and the sample size decreased. Classification accuracy was also recovered better under two-class solutions than under three-class solutions. Both item parameter estimates and classification accuracy differed by model type: more complex models and models with larger class separations produced less accurate results. The mixture proportions also affected RMSE and classification accuracy differently: groups of equal size produced more precise item parameter estimates, but the reverse was the case for classification accuracy. Results suggested that dichotomous mixture IRT models required more than 2,000 examinees to obtain stable results, as even shorter tests required such large sample sizes for precise estimates. This number increased as the number of latent classes, the degree of separation, and model complexity increased.
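The two outcome measures named in the abstract are straightforward to compute; as a minimal sketch with made-up numbers (not the study's data):

```python
import math

def rmse(true, est):
    """Root mean square error between true and estimated parameters."""
    return math.sqrt(sum((t - e) ** 2 for t, e in zip(true, est)) / len(true))

# Hypothetical true vs. estimated item difficulties for 5 items
true_b = [-1.0, -0.5, 0.0, 0.5, 1.0]
est_b  = [-1.1, -0.4, 0.1, 0.4, 1.2]
b_rmse = rmse(true_b, est_b)

# Classification accuracy: percentage of examinees assigned to their true latent class
true_class = [1, 1, 2, 2, 1]
est_class  = [1, 2, 2, 2, 1]
accuracy = 100 * sum(t == e for t, e in zip(true_class, est_class)) / len(true_class)
```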

Educational and Psychological Measurement, 83(3), 520-555. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10177317/pdf/
Citations: 0
A Small Sample Correction for Factor Score Regression.
IF 2.7 | CAS Tier 3 (Psychology) | JCR Q2 (Mathematics, Interdisciplinary Applications) | Pub Date: 2023-06-01 | Epub Date: 2022-07-02 | DOI: 10.1177/00131644221105505
Jasper Bogaert, Wen Wei Loh, Yves Rosseel

Factor score regression (FSR) is widely used as a convenient alternative to traditional structural equation modeling (SEM) for assessing structural relations between latent variables. But when latent variables are simply replaced by factor scores, biases in the structural parameter estimates often have to be corrected, due to the measurement error in the factor scores. The method of Croon (MOC) is a well-known bias correction technique. However, its standard implementation can render poor quality estimates in small samples (e.g. less than 100). This article aims to develop a small sample correction (SSC) that integrates two different modifications to the standard MOC. We conducted a simulation study to compare the empirical performance of (a) standard SEM, (b) the standard MOC, (c) naive FSR, and (d) the MOC with the proposed SSC. In addition, we assessed the robustness of the performance of the SSC in various models with a different number of predictors and indicators. The results showed that the MOC with the proposed SSC yielded smaller mean squared errors than SEM and the standard MOC in small samples and performed similarly to naive FSR. However, naive FSR yielded more biased estimates than the proposed MOC with SSC, by failing to account for measurement error in the factor scores.
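The attenuation that the method of Croon corrects can be seen in a scalar sketch. Note this is only the classical single-predictor disattenuation idea, not the full MOC (which corrects whole covariance matrices using loadings and factor score coefficients); all numbers are hypothetical:

```python
# Single-factor sketch of the bias MOC addresses: a naive regression on factor
# scores is attenuated by measurement error, and rescaling the predictor's
# variance by its reliability undoes that. (Scalar case only, for illustration.)

cov_xy = 0.42   # covariance of predictor factor scores with the outcome
var_fx = 1.00   # observed variance of the predictor's factor scores
rel_fx = 0.70   # reliability of those factor scores (assumed known here)

naive_slope = cov_xy / var_fx                 # biased toward zero
corrected_slope = cov_xy / (var_fx * rel_fx)  # disattenuated estimate
```

In small samples the quantities feeding such a correction are themselves noisy, which is the instability the proposed SSC targets.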

Educational and Psychological Measurement, 83(3), 495-519. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10177321/pdf/
Citations: 0
Is the Area Under Curve Appropriate for Evaluating the Fit of Psychometric Models?
IF 2.7 | CAS Tier 3 (Psychology) | JCR Q2 (Mathematics, Interdisciplinary Applications) | Pub Date: 2023-06-01 | Epub Date: 2022-05-24 | DOI: 10.1177/00131644221098182
Yuting Han, Jihong Zhang, Zhehan Jiang, Dexin Shi

In the literature on modern psychometric modeling, mostly related to item response theory (IRT), model fit is evaluated through known indices, such as χ2, M2, and the root mean square error of approximation (RMSEA) for absolute assessment, as well as the Akaike information criterion (AIC), consistent AIC (CAIC), and Bayesian information criterion (BIC) for relative comparisons. Recent developments show a merging trend between psychometrics and machine learning, yet there remains a gap in model fit evaluation, specifically the use of the area under the curve (AUC). This study focuses on the behavior of AUC in fitting IRT models. Rounds of simulations were conducted to investigate AUC's appropriateness (e.g., power and Type I error rate) under various conditions. The results show that AUC possesses certain advantages under certain conditions, such as high-dimensional structures with two-parameter logistic (2PL) and some three-parameter logistic (3PL) models, while its disadvantages are also obvious when the true model is unidimensional. This cautions researchers about the dangers of relying solely on AUC when evaluating psychometric models.
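For readers unfamiliar with AUC as a fit index in this context, the quantity is easy to state: the probability that a correct response receives a higher model-predicted success probability than an incorrect one. A minimal sketch with hypothetical responses and predictions:

```python
def auc(labels, scores):
    """Area under the ROC curve: probability that a randomly chosen correct
    response (label 1) gets a higher predicted probability than a randomly
    chosen incorrect one (label 0); ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Observed 0/1 responses and an IRT model's predicted success probabilities
y     = [1, 0, 1, 1, 0]
p_hat = [0.9, 0.3, 0.7, 0.4, 0.5]

model_auc = auc(y, p_hat)
```

An AUC of 0.5 indicates chance-level discrimination between correct and incorrect responses; 1.0 indicates perfect separation.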

Educational and Psychological Measurement, 83(3), 586-608. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10177322/pdf/
Citations: 0
Evaluation of Polytomous Item Locations in Multicomponent Measuring Instruments: A Note on a Latent Variable Modeling Procedure.
IF 2.7 | CAS Tier 3 (Psychology) | JCR Q2 (Mathematics, Interdisciplinary Applications) | Pub Date: 2023-06-01 | Epub Date: 2022-03-02 | DOI: 10.1177/00131644211072829
Tenko Raykov, Martin Pusic

This note is concerned with evaluation of location parameters for polytomous items in multiple-component measuring instruments. A point and interval estimation procedure for these parameters is outlined that is developed within the framework of latent variable modeling. The method permits educational, behavioral, biomedical, and marketing researchers to quantify important aspects of the functioning of items with ordered multiple response options, which follow the popular graded response model. The procedure is routinely and readily applicable in empirical studies using widely circulated software and is illustrated with empirical data.
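In the graded response model the abstract refers to, location parameters define cumulative boundary curves, and category probabilities are differences of adjacent curves. A minimal sketch with a hypothetical 4-category item (parameters invented for illustration):

```python
import math

def grm_category_probs(theta, a, b_thresholds):
    """Graded response model: P(X >= k) = logistic(a * (theta - b_k));
    category probabilities are differences of adjacent cumulative curves."""
    def logistic(x):
        return 1 / (1 + math.exp(-x))
    cum = [1.0] + [logistic(a * (theta - b)) for b in b_thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# Hypothetical item: discrimination 1.5, ordered location parameters -1, 0, 1
probs = grm_category_probs(theta=0.0, a=1.5, b_thresholds=[-1.0, 0.0, 1.0])
```

The location parameters mark the trait levels at which each cumulative boundary curve crosses 0.5, which is the aspect of item functioning the note's procedure quantifies with interval estimates.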

Educational and Psychological Measurement, 83(3), 630-641. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10177315/pdf/
Citations: 0
On Bank Assembly and Block Selection in Multidimensional Forced-Choice Adaptive Assessments.
IF 2.1 | CAS Tier 3 (Psychology) | JCR Q2 (Mathematics, Interdisciplinary Applications) | Pub Date: 2023-04-01 | Epub Date: 2022-04-28 | DOI: 10.1177/00131644221087986
Rodrigo S Kreitchmann, Miguel A Sorrel, Francisco J Abad

Multidimensional forced-choice (FC) questionnaires have been consistently found to reduce the effects of socially desirable responding and faking in noncognitive assessments. Although FC has been considered problematic for providing ipsative scores under classical test theory, item response theory (IRT) models enable the estimation of nonipsative scores from FC responses. However, while some authors indicate that blocks composed of opposite-keyed items are necessary to retrieve normative scores, others suggest that these blocks may be less robust to faking, thus impairing the assessment validity. Accordingly, this article presents a simulation study to investigate whether it is possible to retrieve normative scores using only positively keyed items in pairwise FC computerized adaptive testing (CAT). Specifically, the simulation addressed the effect of (a) different bank assembly (with a randomly assembled bank, an optimally assembled bank, and blocks assembled on-the-fly considering every possible pair of items), and (b) block selection rules (i.e., T, and Bayesian D and A-rules) on estimate accuracy, ipsativity, and overlap rates. Moreover, different questionnaire lengths (30 and 60) and trait structures (independent or positively correlated) were studied, and a nonadaptive questionnaire was included as a baseline in each condition. In general, very good trait estimates were retrieved, despite using only positively keyed items. Although the best trait accuracy and lowest ipsativity were found using the Bayesian A-rule with questionnaires assembled on-the-fly, the T-rule under this method led to the worst results. This points to the importance of considering both aspects when designing FC CAT.
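The on-the-fly idea, pairing items adaptively rather than from fixed blocks, can be sketched with a deliberately simplified selection criterion. The stand-in rule below (summed 2PL item information at the current trait estimates) is not the paper's T- or Bayesian D/A-rules, and all item parameters are invented:

```python
import itertools
import math

def item_info(theta, a, b):
    """2PL Fisher information of a single item at trait level theta."""
    p = 1 / (1 + math.exp(-a * (theta - b)))
    return a * a * p * (1 - p)

# item id -> (measured dimension, discrimination, location); all positively keyed
items = {
    1: (0, 1.2, -0.5),
    2: (0, 0.8, 0.4),
    3: (1, 1.5, 0.0),
    4: (1, 1.0, 1.0),
}
theta_hat = [0.2, -0.1]  # current interim estimates on the two traits

def block_score(pair):
    """Toy criterion: total information of the pair at the interim estimates."""
    return sum(item_info(theta_hat[items[i][0]], items[i][1], items[i][2])
               for i in pair)

# Assemble candidate blocks on-the-fly: only cross-dimensional pairs,
# as in multidimensional forced-choice blocks.
pairs = [p for p in itertools.combinations(items, 2)
         if items[p[0]][0] != items[p[1]][0]]
best_pair = max(pairs, key=block_score)
```

A real implementation would score blocks under a Thurstonian or MUPP-type FC model and add exposure control; the sketch only shows the adaptive pairing loop's shape.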

Educational and Psychological Measurement, 83(2), 294-321. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9972126/pdf/
Citations: 0
Investigating Confidence Intervals of Item Parameters When Some Item Parameters Take Priors in the 2PL and 3PL Models.
IF 2.7 | CAS Tier 3 (Psychology) | JCR Q2 (Mathematics, Interdisciplinary Applications) | Pub Date: 2023-04-01 | Epub Date: 2022-05-16 | DOI: 10.1177/00131644221096431
Insu Paek, Zhongtian Lin, Robert Philip Chalmers

To reduce the chance of Heywood cases or nonconvergence when estimating the 2PL or the 3PL model with the marginal maximum likelihood with expectation-maximization (MML-EM) estimation method, priors can be placed on the item slope parameter in the 2PL model or on the pseudo-guessing parameter in the 3PL model, and the marginal maximum a posteriori (MMAP) estimate and posterior standard error (PSE) are obtained. Confidence intervals (CIs) for these parameters, and for parameters that took no priors, were investigated under popular prior distributions, different error covariance estimation methods, test lengths, and sample sizes. A seemingly paradoxical result was that, when priors were used, the error covariance estimation methods known to perform better in the literature (the Louis or Oakes method in this study) did not yield the best CI performance, whereas the cross-product method for error covariance estimation, which tends to overestimate standard errors, exhibited better CI performance. Other important findings for CI performance are also discussed.
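The CIs being evaluated are Wald-type intervals built from a point estimate and its standard error. A minimal sketch with a hypothetical MMAP slope estimate and PSE (numbers invented for illustration):

```python
# Wald-type 95% confidence interval from an MMAP point estimate and its
# posterior standard error (PSE): estimate +/- z * SE.

Z95 = 1.959963985  # standard normal quantile for a two-sided 95% interval

def wald_ci(estimate, se, z=Z95):
    """Symmetric Wald interval around a point estimate."""
    return (estimate - z * se, estimate + z * se)

# Hypothetical 2PL slope estimate and its PSE
a_hat, a_pse = 1.35, 0.20
ci_low, ci_high = wald_ci(a_hat, a_pse)
```

CI performance in such a study is then judged by whether intervals like this cover the true generating parameter at the nominal 95% rate, so an upwardly biased SE widens the interval and can raise coverage.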

Educational and Psychological Measurement, 83(2), 375-400. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9972130/pdf/
Citations: 0
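The stabilizing effect of a prior described in the abstract above can be illustrated with a toy example: when responses are perfectly separated by the latent trait, the maximum likelihood estimate of a 2PL-style slope diverges, while a lognormal prior on the slope keeps the posterior-mode estimate finite. This is a minimal sketch assuming known person parameters and a fixed difficulty of zero, not the MML-EM procedure used in the study; the grid search and the prior's settings are illustrative choices.

```python
import math

# Toy data: ability values and responses perfectly separated at theta = 0.
thetas = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
ys     = [0,    0,    0,    0,    1,   1,   1,   1]

def loglik(a):
    """2PL-style log-likelihood with known thetas, difficulty fixed at 0."""
    ll = 0.0
    for t, y in zip(thetas, ys):
        p = 1.0 / (1.0 + math.exp(-a * t))
        ll += math.log(p) if y == 1 else math.log(1.0 - p)
    return ll

def logprior(a, sigma=0.5):
    """Lognormal(0, sigma) prior on the slope (an illustrative choice)."""
    return -math.log(a) - (math.log(a) ** 2) / (2.0 * sigma ** 2)

grid = [0.05 * k for k in range(1, 1001)]          # slopes a in (0, 50]
ml_a  = max(grid, key=loglik)                      # ML drifts to the grid boundary
map_a = max(grid, key=lambda a: loglik(a) + logprior(a))  # prior keeps this finite

print(f"ML slope estimate:  {ml_a:.2f}")
print(f"MAP slope estimate: {map_a:.2f}")
```

With separated data the log-likelihood increases monotonically in the slope, so the ML search runs to the edge of any grid; adding the log-prior turns the objective into one with an interior maximum, which is the mechanism the studied priors exploit to avoid Heywood-like estimates.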
Multidimensional Forced-Choice CAT With Dominance Items: An Empirical Comparison With Optimal Static Testing Under Different Desirability Matching.
IF 2.1, CAS Tier 3 (Psychology), JCR Q2 (Mathematics, Interdisciplinary Applications). Pub Date: 2023-04-01; Epub Date: 2022-03-07. DOI: 10.1177/00131644221077637
Yin Lin, Anna Brown, Paul Williams

Several forced-choice (FC) computerized adaptive tests (CATs) have emerged in the field of organizational psychology, all of them employing ideal-point items. However, although most items developed historically follow dominance response models, research on FC CAT using dominance items is limited, heavily dominated by simulations, and lacking in empirical deployment. This empirical study trialed an FC CAT with dominance items, described by the Thurstonian Item Response Theory model, with research participants. It investigated important practical issues such as the implications of adaptive item selection and social-desirability balancing criteria for score distributions, measurement accuracy, and participant perceptions. Moreover, nonadaptive but optimal tests of similar design were trialed alongside the CATs to provide a baseline for comparison, helping to quantify the return on investment when converting an otherwise optimized static assessment into an adaptive one. Although the benefit of adaptive item selection in improving measurement precision was confirmed, results also indicated that at shorter test lengths CAT had no notable advantage over optimal static tests. Taking a holistic view incorporating both psychometric and operational considerations, implications for the design and deployment of FC assessments in research and practice are discussed.

Educational and Psychological Measurement, 83(2), 322-350. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9972128/pdf/
Citations: 0
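Under the Thurstonian IRT model referenced in the entry above, the probability of preferring one item over another in a forced-choice pair is a normal ogive of the difference between the two items' latent utilities. The sketch below computes that probability for a single pair; the intercepts, loadings, and uniquenesses are made-up illustrative values, and the parameterization follows the binary-outcome formulation usually attributed to Brown and Maydeu-Olivares, so treat it as an assumption rather than the study's exact specification.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tirt_pair_prob(mu_i, lam_i, psi2_i, eta_a,
                   mu_k, lam_k, psi2_k, eta_b):
    """P(item i preferred over item k) for one forced-choice pair.

    Latent utility: t = mu + lam * eta + eps, with Var(eps) = psi2.
    The binary pair outcome depends on the utility difference t_i - t_k.
    """
    num = (mu_i - mu_k) + lam_i * eta_a - lam_k * eta_b
    den = math.sqrt(psi2_i + psi2_k)
    return norm_cdf(num / den)

# Illustrative values: item i loads on trait a, item k on trait b,
# and the respondent's standing on trait b is fixed at 0.
p_high = tirt_pair_prob(0.0, 0.9, 0.4, 1.5,   # respondent high on trait a
                        0.0, 0.8, 0.5, 0.0)
p_low  = tirt_pair_prob(0.0, 0.9, 0.4, -1.5,  # respondent low on trait a
                        0.0, 0.8, 0.5, 0.0)
print(f"P(prefer i | high trait a) = {p_high:.3f}")
print(f"P(prefer i | low  trait a) = {p_low:.3f}")
```

Because the items here have equal intercepts, raising the respondent's standing on trait a pushes the preference probability toward 1 and lowering it pushes the probability symmetrically toward 0, which is the dominance-style behavior the study exploits for adaptive item selection.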
Range Restriction Affects Factor Analysis: Normality, Estimation, Fit, Loadings, and Reliability.
IF 2.7, CAS Tier 3 (Psychology), JCR Q2 (Mathematics, Interdisciplinary Applications). Pub Date: 2023-04-01; Epub Date: 2022-03-10. DOI: 10.1177/00131644221081867
Alicia Franco-Martínez, Jesús M Alvarado, Miguel A Sorrel

A sample suffers range restriction (RR) when its variance is reduced relative to the population variance, so that it no longer represents that population. When the RR operates on the latent factor rather than directly on the observed variables, the researcher faces indirect RR, which is common with convenience samples. This work explores how this problem affects different outputs of factor analysis: multivariate normality (MVN), the estimation process, goodness of fit, recovery of factor loadings, and reliability. A Monte Carlo study was conducted in which data were generated under the linear selective sampling model, varying sample size (N = 200 and 500 cases), test length (J = 6, 12, 18, and 24 items), loading size (L = .50, .70, and .90), and restriction size (selection ratios from R = 1 down through .90, .80, and so on to .10). The results systematically suggest that the interaction between smaller loadings and stronger restriction distorts the MVN assessment, obstructs the estimation process, and leads to underestimation of the factor loadings and reliability. However, most of the MVN tests and fit indices employed were insensitive to the RR problem. We provide some recommendations for applied researchers.

Educational and Psychological Measurement, 83(2), 262-293. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9972127/pdf/
Citations: 0
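The indirect range-restriction mechanism studied above can be reproduced in a few lines: generate items from a one-factor model, select cases on the latent factor, and observe how the inter-item correlation, and hence the loading it implies, shrinks. This is a minimal sketch of the linear selective sampling idea with illustrative sizes (loading .70, selection ratio .50, two parallel items), not the authors' full Monte Carlo design.

```python
import math
import random
import statistics as st

random.seed(1)
N, loading, ratio = 20000, 0.70, 0.50

# One-factor population: x_j = loading * f + sqrt(1 - loading^2) * e_j
pop = []
for _ in range(N):
    f = random.gauss(0, 1)
    x1 = loading * f + math.sqrt(1 - loading**2) * random.gauss(0, 1)
    x2 = loading * f + math.sqrt(1 - loading**2) * random.gauss(0, 1)
    pop.append((f, x1, x2))

def corr(xs, ys):
    """Pearson correlation using population moments."""
    mx, my = st.fmean(xs), st.fmean(ys)
    sx, sy = st.pstdev(xs), st.pstdev(ys)
    cov = st.fmean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
    return cov / (sx * sy)

# Indirect RR: selection acts on the latent factor f, not on x directly.
cutoff = sorted(p[0] for p in pop)[int(N * (1 - ratio))]
sel = [p for p in pop if p[0] >= cutoff]

r_full = corr([p[1] for p in pop], [p[2] for p in pop])
r_rest = corr([p[1] for p in sel], [p[2] for p in sel])

# For a parallel pair, the implied loading is the square root of their correlation.
print(f"full-range  r = {r_full:.3f}, implied loading = {math.sqrt(r_full):.3f}")
print(f"restricted  r = {r_rest:.3f}, implied loading = {math.sqrt(r_rest):.3f}")
```

The full-range correlation sits near the population value of loading squared (.49), while selecting the top half on the latent factor cuts it substantially, so any loading recovered from the restricted sample underestimates .70 — the same direction of bias the simulation study reports.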