Latest publications in the Journal of Educational and Behavioral Statistics

Improving Balance in Educational Measurement: A Legacy of E. F. Lindquist
IF 2.4 CAS Tier 3 (Psychology) Q2 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-01-07 DOI: 10.3102/10769986231218306
Daniel Koretz
A critically important balance in educational measurement between practical concerns and matters of technique has atrophied in recent decades, and as a result, some important issues in the field have not been adequately addressed. I start with the work of E. F. Lindquist, who exemplified the balance that is now wanting. Lindquist was arguably the most prolific developer of achievement tests in the history of the field and an accomplished statistician, but he nonetheless focused extensively on the practical limitations of testing and their implications for test development, test use, and inference. I describe the withering of this balance and discuss two pressing issues that have not been adequately addressed as a result: the lack of robustness of performance standards and score inflation. I conclude by discussing steps toward reestablishing the needed balance.
Citations: 0
A Simple Technique Assessing Ordinal and Disordinal Interaction Effects
IF 2.4 CAS Tier 3 (Psychology) Q2 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2023-12-21 DOI: 10.3102/10769986231217472
Sang-June Park, Youjae Yi
Previous research explicates ordinal and disordinal interactions through the concept of the “crossover point.” This point is determined via simple regression models of a focal predictor at specific moderator values and signifies the intersection of these models. An interaction effect is labeled as disordinal (or ordinal) when the crossover point falls within (or outside) the observable range of the focal predictor. However, this approach might yield erroneous conclusions due to the crossover point’s intrinsic nature as a random variable defined by mean and variance. To statistically evaluate ordinal and disordinal interactions, a comparison between the observable range and the confidence interval (CI) of the crossover point is crucial. Numerous methods for establishing CIs, including reparameterization and bootstrap techniques, exist. Yet, these alternative methods are scarcely employed in social science journals for assessing ordinal and disordinal interactions. This note introduces a straightforward approach for calculating CIs, leveraging an extension of the Johnson–Neyman technique.
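The decision rule described in this abstract can be sketched with a textbook delta-method interval for the crossover point of y = b0 + b1*x + b2*m + b3*x*m. This is a generic illustration, not the note's specific Johnson-Neyman extension, and all function names are illustrative:

```python
import math

def crossover_ci(b2, b3, var_b2, var_b3, cov_b2b3, z=1.96):
    """Crossover point x* = -b2/b3 of the model
    y = b0 + b1*x + b2*m + b3*x*m, with a delta-method CI."""
    point = -b2 / b3
    # Delta method: the gradient of -b2/b3 is (-1/b3, b2/b3**2).
    var = (var_b2 / b3**2
           + b2**2 * var_b3 / b3**4
           - 2.0 * b2 * cov_b2b3 / b3**3)
    half = z * math.sqrt(var)
    return point, (point - half, point + half)

def classify(ci, x_min, x_max):
    """Disordinal if the whole CI falls inside the observed range of the
    focal predictor, ordinal if it falls entirely outside, otherwise
    inconclusive."""
    lo, hi = ci
    if x_min < lo and hi < x_max:
        return "disordinal"
    if hi < x_min or lo > x_max:
        return "ordinal"
    return "inconclusive"
```

Comparing the whole interval, rather than the point estimate, to the observable range is what guards against the erroneous conclusions the note warns about.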
Citations: 0
A Comparison of Latent Semantic Analysis and Latent Dirichlet Allocation in Educational Measurement
IF 2.4 CAS Tier 3 (Psychology) Q2 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2023-11-27 DOI: 10.3102/10769986231209446
Jordan M. Wheeler, Allan S. Cohen, Shiyu Wang
Topic models are mathematical and statistical models used to analyze textual data. The objective of topic models is to gain information about the latent semantic space of a set of related textual data. The semantic space of a set of textual data contains the relationship between documents and words and how they are used. Topic models are becoming more common in educational measurement research as a method for analyzing students’ responses to constructed-response items. Two popular topic models are latent semantic analysis (LSA) and latent Dirichlet allocation (LDA). LSA uses linear algebra techniques, whereas LDA uses an assumed statistical model and generative process. In educational measurement, LSA is often used in algorithmic scoring of essays due to its high reliability and agreement with human raters. LDA is often used as a supplemental analysis to gain additional information about students, such as their thinking and reasoning. This article reviews and compares the LSA and LDA topic models. This article also introduces a methodology for comparing the semantic spaces obtained by the two models and uses a simulation study to investigate their similarities.
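As a minimal illustration of the LSA side of this comparison, the semantic space can be obtained from a truncated SVD of a toy term-document matrix; the LDA side requires fitting a generative model and is not sketched here. The data are invented:

```python
import numpy as np

# Toy term-document count matrix (rows = terms, columns = documents).
X = np.array([
    [2.0, 0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0, 1.0],
    [0.0, 0.0, 2.0, 1.0],
])

# LSA: truncated SVD projects documents into a k-dimensional semantic space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one row per document

def cosine(a, b):
    """Cosine similarity between two vectors in the semantic space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine(doc_vecs[0], doc_vecs[2])  # documents 1 and 3 share a term
```

Similarities in this reduced space, rather than in the raw count space, are what LSA-based essay scoring compares against human-rated exemplars.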
Citations: 0
Sample Size Calculation and Optimal Design for Multivariate Regression-Based Norming
IF 2.4 CAS Tier 3 (Psychology) Q2 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2023-11-22 DOI: 10.3102/10769986231210807
Francesco Innocenti, M. Candel, Frans E. S. Tan, Gerard J. P. van Breukelen
Normative studies are needed to obtain norms for comparing individuals with the reference population on relevant clinical or educational measures. Norms can be obtained in an efficient way by regressing the test score on relevant predictors, such as age and sex. When several measures are normed with the same sample, a multivariate regression-based approach must be adopted for at least two reasons: (1) to take into account the correlations between the measures of the same subject, in order to test certain scientific hypotheses and to reduce misclassification of subjects in clinical practice, and (2) to reduce the number of significance tests involved in selecting predictors for the purpose of norming, thus preventing the inflation of the type I error rate. A new multivariate regression-based approach is proposed that combines all measures for an individual through the Mahalanobis distance, thus providing an indicator of the individual’s overall performance. Furthermore, optimal designs for the normative study are derived under five multivariate polynomial regression models, assuming multivariate normality and homoscedasticity of the residuals, and efficient robust designs are presented in case of uncertainty about the correct model for the analysis of the normative sample. Sample size calculation formulas are provided for the new Mahalanobis distance-based approach. The results are illustrated with data from the Maastricht Aging Study (MAAS).
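The Mahalanobis-distance indicator described here reduces to a small computation once the predicted means and the residual covariance are in hand; a hedged sketch with invented numbers:

```python
import numpy as np

def mahalanobis_sq(scores, predicted, cov):
    """Squared Mahalanobis distance of an individual's vector of test
    scores from the regression-predicted means, given the residual
    covariance matrix estimated from the normative sample."""
    r = np.asarray(scores, float) - np.asarray(predicted, float)
    return float(r @ np.linalg.inv(cov) @ r)

# With an identity residual covariance the distance is just the sum of
# squared residuals: (1 - 0)^2 + (2 - 0)^2 = 5.
d2 = mahalanobis_sq([1.0, 2.0], [0.0, 0.0], np.eye(2))
```

Under the model's multivariate normality assumption, the squared distance follows a chi-square distribution with as many degrees of freedom as there are measures, which is what turns it into a normable indicator of overall performance.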
Citations: 0
Corrigendum to Power Approximations for Overall Average Effects in Meta-Analysis With Dependent Effect Sizes
IF 2.4 CAS Tier 3 (Psychology) Q2 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2023-11-17 DOI: 10.3102/10769986231207878
Citations: 0
Combining Human and Automated Scoring Methods in Experimental Assessments of Writing: A Case Study Tutorial
CAS Tier 3 (Psychology) Q2 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2023-11-08 DOI: 10.3102/10769986231207886
Reagan Mozer, Luke Miratrix, Jackie Eunjung Relyea, James S. Kim
In a randomized trial that collects text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by human raters. An impact analysis can then be conducted to compare treatment and control groups, using the hand-coded scores as a measured outcome. This process is both time and labor-intensive, which creates a persistent barrier for large-scale assessments of text. Furthermore, enriching one’s understanding of a found impact on text outcomes via secondary analyses can be difficult without additional scoring efforts. The purpose of this article is to provide a pipeline for using machine-based text analytic and data mining tools to augment traditional text-based impact analysis by analyzing impacts across an array of automatically generated text features. In this way, we can explore what an overall impact signifies in terms of how the text has evolved due to treatment. Through a case study based on a recent field trial in education, we show that machine learning can indeed enrich experimental evaluations of text by providing a more comprehensive and fine-grained picture of the mechanisms that lead to stronger argumentative writing in a first- and second-grade content literacy intervention. Relying exclusively on human scoring, by contrast, is a lost opportunity. Overall, the workflow and analytical strategy we describe can serve as a template for researchers interested in performing their own experimental evaluations of text.
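A toy sketch of the core idea, comparing treatment and control groups on automatically generated text features; the article's pipeline adds proper inference and multiplicity control, and the features and documents below are invented:

```python
def text_features(doc):
    """Two automatically generated text features for one document."""
    tokens = doc.lower().split()
    return {
        "n_tokens": len(tokens),
        "type_token_ratio": len(set(tokens)) / len(tokens) if tokens else 0.0,
    }

def mean_feature_gap(treated, control, feature):
    """Treatment-minus-control difference in group means on one feature."""
    t = [text_features(d)[feature] for d in treated]
    c = [text_features(d)[feature] for d in control]
    return sum(t) / len(t) - sum(c) / len(c)

# Invented one-document "groups" just to show the mechanics.
gap = mean_feature_gap(
    ["the claim is supported because the evidence shows a clear pattern"],
    ["i like dogs"],
    "n_tokens",
)  # 11 - 3 = 8
```

Scanning many such machine features alongside the hand-coded outcome is what lets the analysis say how the text evolved under treatment, not just whether an overall impact exists.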
Citations: 0
A Two-Level Adaptive Test Battery
CAS Tier 3 (Psychology) Q2 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2023-11-06 DOI: 10.3102/10769986231209447
Wim J. van der Linden, Luping Niu, Seung W. Choi
A test battery with two different levels of adaptation is presented: a within-subtest level for the selection of the items in the subtests and a between-subtest level to move from one subtest to the next. The battery runs on a two-level model consisting of a regular response model for each of the subtests extended with a second level for the joint distribution of their abilities. The presentation of the model is followed by an optimized MCMC algorithm to update the posterior distribution of each of its ability parameters, select the items to Bayesian optimality, and adaptively move from one subtest to the next. Thanks to extremely rapid convergence of the Markov chain and simple posterior calculations, the algorithm can be used in real-world applications without any noticeable latency. Finally, an empirical study with a battery of short diagnostic subtests is shown to yield score accuracies close to traditional one-level adaptive testing with subtests of double lengths.
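The per-subtest posterior updates can be illustrated with a plain random-walk Metropolis sampler for a single ability under a Rasch model; this is a one-level stand-in under invented item parameters, not the article's optimized two-level MCMC algorithm:

```python
import math
import random

def rasch_loglik(theta, responses, difficulties):
    """Log-likelihood of 0/1 responses under a Rasch model."""
    ll = 0.0
    for y, b in zip(responses, difficulties):
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        ll += math.log(p if y else 1.0 - p)
    return ll

def posterior_mean_theta(responses, difficulties, mu=0.0, sd=1.0,
                         n_iter=3000, step=0.8, seed=1):
    """Random-walk Metropolis draws for one ability parameter with a
    normal prior; returns the posterior mean after burn-in."""
    rng = random.Random(seed)

    def log_post(t):
        return rasch_loglik(t, responses, difficulties) - 0.5 * ((t - mu) / sd) ** 2

    theta, lp = mu, log_post(mu)
    draws = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        draws.append(theta)
    burn = n_iter // 2
    return sum(draws[burn:]) / (n_iter - burn)

# Four of five items correct on items of increasing difficulty.
est = posterior_mean_theta([1, 1, 1, 0, 1], [-1.0, -0.5, 0.0, 0.5, 1.0])
```

With most items answered correctly, the posterior mean lands between the prior mean of zero and the likelihood's peak; in the battery, this posterior would also feed the second-level joint ability distribution that steers the move to the next subtest.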
Citations: 0
Analyzing Polytomous Test Data: A Comparison Between an Information-Based IRT Model and the Generalized Partial Credit Model
CAS Tier 3 (Psychology) Q2 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2023-11-06 DOI: 10.3102/10769986231207879
Joakim Wallmark, James O. Ramsay, Juan Li, Marie Wiberg
Item response theory (IRT) models the relationship between the possible scores on a test item against a test taker’s attainment of the latent trait that the item is intended to measure. In this study, we compare two models for tests with polytomously scored items: the optimal scoring (OS) model, a nonparametric IRT model based on the principles of information theory, and the generalized partial credit (GPC) model, a widely used parametric alternative. We evaluate these models using both simulated and real test data. In the real data examples, the OS model demonstrates superior model fit compared to the GPC model across all analyzed datasets. In our simulation study, the OS model outperforms the GPC model in terms of bias, but at the cost of larger standard errors for the probabilities along the estimated item response functions. Furthermore, we illustrate how surprisal arc length, an IRT scale invariant measure of ability with metric properties, can be used to put scores from vastly different types of IRT models on a common scale. We also demonstrate how arc length can be a viable alternative to sum scores for scoring test takers.
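The GPC side of the comparison is easy to make concrete: category probabilities for one item are a softmax over cumulative step logits. A minimal sketch with invented item parameters:

```python
import math

def gpc_probs(theta, a, thresholds):
    """Category probabilities (categories 0..m) for one generalized
    partial credit item with discrimination a and step difficulties
    b_1..b_m: a softmax over cumulative step logits."""
    logits = [0.0]  # empty sum for category 0
    for b in thresholds:
        logits.append(logits[-1] + a * (theta - b))
    mx = max(logits)  # max-subtraction for numerical stability
    ex = [math.exp(t - mx) for t in logits]
    s = sum(ex)
    return [e / s for e in ex]

p = gpc_probs(theta=0.5, a=1.2, thresholds=[-1.0, 0.0, 1.0])
```

The nonparametric OS model replaces these parametric curves with functions estimated from the data, which is why the two models can disagree on fit while scoring the same responses.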
Citations: 0
Introduction to JEBS Special Issue on Diagnostic Statistical Models
CAS Tier 3 (Psychology) Q2 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2023-10-26 DOI: 10.3102/10769986231210002
Steven Andrew Culpepper, Gongjun Xu
The COVID-19 pandemic forced millions of students to transition from traditional in-person instruction into a learning environment that incorporates facets of social distancing and online education (National Center for Education Statistics, 2022). One consequence is that the massive disruption of the COVID-19 health crisis is related to the largest declines in elementary and secondary students’ educational achievement as inferred from recent results of the National Assessment of Educational Progress long-term trend (U.S. Department of Education, 2022). Accordingly, recent events have raised awareness of the need for robust formative assessments to accelerate learning and improve educational and behavioral outcomes.
Citations: 0
Pairwise Regression Weight Contrasts: Models for Allocating Psychological Resources
CAS Tier 3 (Psychology) Q2 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2023-10-13 DOI: 10.3102/10769986231200155
Mark L. Davison, Hao Jia, Ernest C. Davenport
Researchers examine contrasts between analysis of variance (ANOVA) effects but seldom contrasts between regression coefficients even though such coefficients are an ANOVA generalization. Regression weight contrasts can be analyzed by reparameterizing the linear model. Two pairwise contrast models are developed for the study of qualitative differences among predictors. One leads to tests of null hypotheses that the regression weight for a reference predictor equals each of the other weights. The second involves ordered predictors and null hypotheses that the weight for a predictor equals that for the variables just above or below in the ordering. As illustration, qualitative differences in high school math course content are related to math achievement. The models facilitate the study of qualitative differences among predictors and the allocation of resources. They also readily generalize to moderated, hierarchical, and generalized linear forms.
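The reparameterization trick in this abstract can be shown directly: rewriting b1*x1 + b2*x2 as b1*(x1 + x2) + (b2 - b1)*x2 turns the second weight into the pairwise contrast. A sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.8 * x2 + rng.normal(scale=0.1, size=n)

# Original parameterization: y ~ 1 + x1 + x2.
X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Reparameterized: y ~ 1 + (x1 + x2) + x2. The weight on the combined
# column stays b1, and the weight on x2 becomes the contrast b2 - b1,
# so its standard error and t-test directly test b1 = b2.
Xc = np.column_stack([np.ones(n), x1 + x2, x2])
bc = np.linalg.lstsq(Xc, y, rcond=None)[0]
contrast = bc[2]  # estimates b2 - b1, here about 0.3
```

Because both design matrices span the same column space, the fits are identical; only the interpretation of the weights changes, which is what makes the contrast testable with ordinary regression output.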
Cited: 0
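The reparameterization strategy the abstract describes can be illustrated with a small simulation. The idea: to test H0: b1 = b2 in y = b0 + b1*x1 + b2*x2, refit the model with columns (x1 + x2) and x2, so the weight on x2 becomes the contrast d = b2 − b1 and the ordinary t test on that weight tests the pairwise hypothesis directly. This is a minimal sketch with simulated data, not the authors' code; the variable names and the OLS helper are my own.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# Simulated outcome with unequal true weights (0.5 vs. 0.9).
y = 1.0 + 0.5 * x1 + 0.9 * x2 + rng.normal(scale=0.3, size=n)

def ols(X, y):
    """Return OLS coefficients and their standard errors for y = X @ beta + e."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

# Standard parameterization: y = b0 + b1*x1 + b2*x2.
X_std = np.column_stack([np.ones(n), x1, x2])
b_std, _ = ols(X_std, y)

# Reparameterization: y = b0 + b1*(x1 + x2) + d*x2, where d = b2 - b1,
# so the ordinary t statistic on d tests H0: b1 = b2 directly.
X_con = np.column_stack([np.ones(n), x1 + x2, x2])
b_con, se_con = ols(X_con, y)

t_d = b_con[2] / se_con[2]  # t statistic for the pairwise contrast
print("contrast estimate d:", round(float(b_con[2]), 3))
```

Because the two designs span the same column space, the fits are identical: the weight on (x1 + x2) reproduces b1, and the weight on x2 in the reparameterized model equals b2 − b1 exactly, which is what makes the standard t test usable for the contrast.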