
Latest publications in the Journal of Educational Measurement

Measuring the Accuracy of True Score Predictions for AI Scoring Evaluation
IF 1.6 | CAS Tier 4 (Psychology) | Q3 PSYCHOLOGY, APPLIED | Pub Date: 2025-10-12 | DOI: 10.1111/jedm.70011
Daniel F. McCaffrey, Jodi M. Casabianca, Matthew S. Johnson

Use of artificial intelligence (AI) to score responses is growing in popularity and likely to increase. Evidence of the validity of scores relies on quadratic weighted kappa (QWK) to demonstrate agreement between AI scores and human ratings. QWK is a measure of agreement that accounts for chance agreement and the ordinality of the data by giving greater weight to larger disagreements. It has known shortcomings, including sensitivity to the human rating reliability. The proportional reduction in mean squared error (PRMSE) measures agreement between predictions and their target while accounting for measurement error in the target; for example, it can quantify the accuracy of an automated scoring model with respect to predicting the human true scores rather than the observed ratings. Extensive simulation study results show PRMSE is robust to many factors to which QWK is sensitive, such as the human rater reliability, skew in the data, and the number of score points. Analysis of operational test data demonstrates QWK and PRMSE can lead to different conclusions about AI scores. We investigate sample size requirements for accurate estimation of PRMSE in the context of AI scoring, although the results could apply more generally to measures with similar distributions as those tested in our study.

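The contrast between the two agreement measures is easier to see in code. The sketch below is illustrative rather than the authors' implementation: it computes QWK with scikit-learn and estimates PRMSE with the common two-rater (Haberman-style) formulation, assuming every response carries two interchangeable human ratings; the function names and data are made up.

```python
# Illustrative sketch (not the article's code): QWK vs. PRMSE for AI scores
# evaluated against double human ratings. The PRMSE estimator follows the
# common two-rater formulation; details may differ from the article.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def qwk(ai_scores, human_scores):
    """Quadratic weighted kappa between AI scores and a single human rating."""
    return cohen_kappa_score(ai_scores, human_scores, weights="quadratic")

def prmse_two_raters(ai_scores, h1, h2):
    """Proportional reduction in MSE for predicting the human *true* score.

    Assumes each response was scored by two interchangeable human raters,
    so rater error variance can be estimated from their disagreement.
    """
    ai, h1, h2 = map(np.asarray, (ai_scores, h1, h2))
    h_bar = (h1 + h2) / 2.0
    var_err = np.mean((h1 - h2) ** 2) / 2.0            # per-rater error variance
    var_true = np.var(h_bar, ddof=1) - var_err / 2.0   # true-score variance
    mse_true = np.mean((ai - h_bar) ** 2) - var_err / 2.0  # MSE against true score
    return 1.0 - mse_true / var_true

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true = rng.integers(0, 5, size=500)                # hypothetical 0-4 score scale

    def noisy():
        return np.clip(true + rng.integers(-1, 2, size=true.size), 0, 4)

    h1, h2, ai = noisy(), noisy(), noisy()
    print("QWK (AI vs. rater 1):", round(qwk(ai, h1), 3))
    print("PRMSE (AI vs. true score):", round(prmse_two_raters(ai, h1, h2), 3))
```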
{"title":"Measuring the Accuracy of True Score Predictions for AI Scoring Evaluation","authors":"Daniel F. McCaffrey,&nbsp;Jodi M. Casabianca,&nbsp;Matthew S. Johnson","doi":"10.1111/jedm.70011","DOIUrl":"https://doi.org/10.1111/jedm.70011","url":null,"abstract":"<p>Use of artificial intelligence (AI) to score responses is growing in popularity and likely to increase. Evidence of the validity of scores relies on quadratic weighted kappa (QWK) to demonstrate agreement between AI scores and human ratings. QWK is a measure of agreement that accounts for chance agreement and the ordinality of the data by giving greater weight to larger disagreements. It has known shortcomings including sensitivity to the human rating reliability. The proportional reduction in mean squared error (PRMSE) measures agreement between predictions and their target that accounts for measurement error in the target. For example, the accuracy of the automated scoring model, with respect to prediction of the human true scores rather than the observed ratings. Extensive simulation study results show PRMSE is robust to many factors to which QWK is sensitive such as the human rater reliability, skew in the data and the number of score points. Analysis of operational test data demonstrates QWK and PRMSE can lead to different conclusions about AI scores. We investigate sample size requirements for accurate estimation of PRMSE in the context of AI scoring, although the results could apply more generally to measures with similar distributions as those tested in our study.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"763-786"},"PeriodicalIF":1.6,"publicationDate":"2025-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Two-Phase Content-Balancing CD-CAT Online Item Calibration
IF 1.6 | CAS Tier 4 (Psychology) | Q3 PSYCHOLOGY, APPLIED | Pub Date: 2025-10-08 | DOI: 10.1111/jedm.70012
Jing Huang, Yuxiao Zhang, Jason W. Morphew, Jayson M. Nissen, Ben Van Dusen, Hua Hua Chang

Online calibration estimates new item parameters alongside previously calibrated items, supporting efficient item replenishment. However, most existing online calibration procedures for Cognitive Diagnostic Computerized Adaptive Testing (CD-CAT) lack mechanisms to ensure content balance during live testing. This limitation can lead to uneven content coverage, potentially undermining the alignment with instructional goals. This research extends the current calibration framework by integrating a two-phase test design with a content-balancing item selection method into the online calibration procedure. Simulation studies evaluated item parameter recovery and attribute profile estimation accuracy under the proposed procedure. Results indicated that the developed procedure yielded more accurate new item parameter estimates. The procedure also maintained content representativeness under both balanced and unbalanced constraints. Attribute profile estimation was sensitive to item parameter values. Accuracy declined when items had larger parameter values. Calibration improved with larger sample sizes and smaller parameter values. Longer test lengths contributed more to profile estimation than to new item calibration. These findings highlight design trade-offs in adaptive item replenishment and suggest new directions for hybrid calibration methods.

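A minimal sketch of the content-balancing piece only (not the article's two-phase calibration design): at each step, pick the content area that is furthest below its blueprint target, then the best-ranked available item within it. The ranking function stands in for whatever information- or calibration-driven criterion is used operationally; all names and numbers are illustrative.

```python
# Minimal sketch (not the authors' procedure) of content-balanced item selection.
from collections import Counter

def select_next_item(available, administered, targets, rank):
    """available: list of (item_id, area) still in the pool;
    administered: list of areas administered so far;
    targets: dict area -> target proportion; rank: item_id -> priority value."""
    n = max(len(administered), 1)
    counts = Counter(administered)
    # Deficit = how far each area is below its target share of the test so far.
    deficits = {a: targets[a] - counts[a] / n for a in targets}
    for area in sorted(deficits, key=deficits.get, reverse=True):
        pool = [i for i, a in available if a == area]
        if pool:
            return max(pool, key=rank)
    raise ValueError("item pool exhausted")

# Example: two content areas with a 60/40 blueprint; one algebra item already given.
items = [("i1", "algebra"), ("i2", "algebra"), ("i3", "geometry"), ("i4", "geometry")]
priority = {"i1": 0.2, "i2": 0.5, "i3": 0.9, "i4": 0.4}
print(select_next_item(items, ["algebra"],
                       {"algebra": 0.6, "geometry": 0.4},
                       rank=lambda i: priority[i]))   # -> "i3" (geometry is owed)
```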
{"title":"Two-Phase Content-Balancing CD-CAT Online Item Calibration","authors":"Jing Huang,&nbsp;Yuxiao Zhang,&nbsp;Jason W. Morphew,&nbsp;Jayson M. Nissen,&nbsp;Ben Van Dusen,&nbsp;Hua Hua Chang","doi":"10.1111/jedm.70012","DOIUrl":"https://doi.org/10.1111/jedm.70012","url":null,"abstract":"<p>Online calibration estimates new item parameters alongside previously calibrated items, supporting efficient item replenishment. However, most existing online calibration procedures for Cognitive Diagnostic Computerized Adaptive Testing (CD-CAT) lack mechanisms to ensure content balance during live testing. This limitation can lead to uneven content coverage, potentially undermining the alignment with instructional goals. This research extends the current calibration framework by integrating a two-phase test design with a content-balancing item selection method into the online calibration procedure. Simulation studies evaluated item parameter recovery and attribute profile estimation accuracy under the proposed procedure. Results indicated that the developed procedure yielded more accurate new item parameter estimates. The procedure also maintained content representativeness under both balanced and unbalanced constraints. Attribute profile estimation was sensitive to item parameter values. Accuracy declined when items had larger parameter values. Calibration improved with larger sample sizes and smaller parameter values. Longer test lengths contributed more to profile estimation than to new item calibration. These findings highlight design trade-offs in adaptive item replenishment and suggest new directions for hybrid calibration methods.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"787-808"},"PeriodicalIF":1.6,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jedm.70012","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IRT Scoring and Recursion for Estimating Reliability and Other Accuracy Indices
IF 1.6 | CAS Tier 4 (Psychology) | Q3 PSYCHOLOGY, APPLIED | Pub Date: 2025-09-28 | DOI: 10.1111/jedm.70008
Tim Moses, YoungKoung Kim

This study considers the estimation of marginal reliability and conditional accuracy measures using a generalized recursion procedure with several IRT-based ability and score estimators. The estimators include MLE, TCC, and EAP abilities, and corresponding test scores obtained with different weightings of the item scores. We consider reliability estimates for 1-, 2-, and 3-parameter logistic IRT models (1PL, 2PL, and 3PL) for tests of dichotomously scored items, using IRT calibrations from two datasets. The generalized recursion procedure is shown to produce conditional probability distributions for the considered IRT estimators that can be used in the estimation of marginal reliabilities and conditional accuracies (biases and CSEMs). These reliabilities and conditional accuracies are shown to have less extreme and more plausible values compared to theoretical approaches based on test information. The proposed recursion procedure for the estimation of reliability and other accuracy measures is demonstrated for testing situations involving different test lengths, IRT models, and different types of IRT parameter inaccuracies.

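The article generalizes the recursion to several IRT ability and score estimators; as a point of reference only, the sketch below shows the standard Lord-Wingersky recursion for a summed score under a 2PL model and the marginal reliability it yields by quadrature over a standard normal ability distribution. The item parameters are made up.

```python
# Reference sketch: Lord-Wingersky recursion for the conditional summed-score
# distribution, then marginal reliability by Gauss-Hermite quadrature.
import numpy as np

def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def summed_score_dist(theta, a, b):
    """P(X = x | theta) for x = 0..n items, built one item at a time."""
    dist = np.array([1.0])
    for aj, bj in zip(a, b):
        p = p_correct(theta, aj, bj)
        new = np.zeros(dist.size + 1)
        new[:-1] += dist * (1 - p)   # item answered incorrectly
        new[1:] += dist * p          # item answered correctly
        dist = new
    return dist

def marginal_reliability(a, b, nodes=61):
    thetas, w = np.polynomial.hermite_e.hermegauss(nodes)
    w = w / np.sqrt(2 * np.pi)       # weights for a standard normal density
    scores = np.arange(len(a) + 1)
    cond_mean = np.empty(nodes)
    cond_var = np.empty(nodes)
    for k, th in enumerate(thetas):
        d = summed_score_dist(th, a, b)
        cond_mean[k] = np.sum(scores * d)
        cond_var[k] = np.sum((scores - cond_mean[k]) ** 2 * d)
    true_var = np.sum(w * cond_mean ** 2) - np.sum(w * cond_mean) ** 2
    error_var = np.sum(w * cond_var)
    return true_var / (true_var + error_var)

a = np.array([1.0, 1.2, 0.8, 1.5, 0.9])     # hypothetical discriminations
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # hypothetical difficulties
print("marginal reliability:", round(marginal_reliability(a, b), 3))
```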
{"title":"IRT Scoring and Recursion for Estimating Reliability and Other Accuracy Indices","authors":"Tim Moses,&nbsp;YoungKoung Kim","doi":"10.1111/jedm.70008","DOIUrl":"https://doi.org/10.1111/jedm.70008","url":null,"abstract":"<p>This study considers the estimation of marginal reliability and conditional accuracy measures using a generalized recursion procedure with several IRT-based ability and score estimators. The estimators include MLE, TCC, and EAP abilities, and corresponding test scores obtained with different weightings of the item scores. We consider reliability estimates for 1-, 2-, and 3-parameter logistic IRT models (1PL, 2PL, and 3PL) for tests of dichotomously scored items, using IRT calibrations from two datasets. The generalized recursion procedure is shown to produce conditional probability distributions for the considered IRT estimators that can be used in the estimation of marginal reliabilities and conditional accuracies (biases and CSEMs). These reliabilities and conditional accuracies are shown to have less extreme and more plausible values compared to theoretical approaches based on test information. The proposed recursion procedure for the estimation of reliability and other accuracy measures are demonstrated for testing situations involving different test lengths, IRT models, and different types of IRT parameter inaccuracies.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"718-739"},"PeriodicalIF":1.6,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From Item Estimates to Test Operations: The Cascading Effect of Rapid Guessing
IF 1.6 | CAS Tier 4 (Psychology) | Q3 PSYCHOLOGY, APPLIED | Pub Date: 2025-09-28 | DOI: 10.1111/jedm.70010
Sarah Alahmadi, Christine E. DeMars

Inadequate test-taking effort poses a significant challenge, particularly when low-stakes test results inform high-stakes policy and psychometric decisions. We examined how rapid guessing (RG), a common form of low test-taking effort, biases item parameter estimates, particularly the discrimination and difficulty parameters. Previous research reported conflicting findings on the direction of bias and what contributes to it. Using simulated data that replicate real-world, low-stakes testing conditions, this study reconciles the inconsistencies by identifying the conditions under which item parameters are over- or underestimated. Bias is influenced by item-related factors (true parameter values and the number of RG responses the items receive) and examinee-related factors (proficiency differences between rapid guessers and non-rapid guessers, the variability in RG behavior among rapid guessers, and the pattern of RG responses throughout the test). The findings highlight that ignoring RG not only distorts proficiency estimates but may also impact broader test operations, including adaptive testing, equating, and standard setting. By demonstrating the potential far-reaching effects of RG, we underline the need for testing professionals to implement methods that mitigate RG's impact (such as motivation filtering) to protect the integrity of their psychometric work.

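One mitigation named above is motivation filtering. The sketch below is a generic illustration, not the article's simulation design: it flags likely rapid guesses with a simple normative response-time threshold (10% of each item's median time, one rule of thumb among several) and recomputes item p-values with flagged responses treated as not administered rather than scored wrong.

```python
# Generic sketch of response-time-based motivation filtering (illustrative data).
import numpy as np

def flag_rapid_guessing(resp_times, fraction=0.10):
    """resp_times: examinees x items matrix of response times in seconds.
    Returns a boolean matrix where True marks a likely rapid guess."""
    thresholds = fraction * np.median(resp_times, axis=0)   # one threshold per item
    return resp_times < thresholds

def effort_filtered_pvalues(responses, rg_flags):
    """Classical item difficulty (p-values) with rapid guesses masked out."""
    masked = np.ma.masked_array(responses, mask=rg_flags)
    return masked.mean(axis=0).filled(np.nan)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, j = 1000, 4
    rt = rng.lognormal(mean=3.0, sigma=0.5, size=(n, j))     # ~20 s typical times
    rt[:50] = rng.uniform(0.5, 2.0, size=(50, j))            # 50 rapid guessers
    resp = rng.binomial(1, 0.7, size=(n, j))
    resp[:50] = rng.binomial(1, 0.25, size=(50, j))          # chance-level accuracy
    flags = flag_rapid_guessing(rt)
    print("naive p-values:   ", resp.mean(axis=0).round(3))
    print("filtered p-values:", effort_filtered_pvalues(resp, flags).round(3))
```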
{"title":"From Item Estimates to Test Operations: The Cascading Effect of Rapid Guessing","authors":"Sarah Alahmadi,&nbsp;Christine E. DeMars","doi":"10.1111/jedm.70010","DOIUrl":"https://doi.org/10.1111/jedm.70010","url":null,"abstract":"<p>Inadequate test-taking effort poses a significant challenge, particularly when low-stakes test results inform high-stakes policy and psychometric decisions. We examined how rapid guessing (RG), a common form of low test-taking effort, biases item parameter estimates, particularly the discrimination and difficulty parameters. Previous research reported conflicting findings on the direction of bias and what contributes to it. Using simulated data that replicate real-world, low-stakes testing conditions, this study reconciles the inconsistencies by identifying the conditions under which item parameters are over- or underestimated. Bias is influenced by item-related factors (true parameter values and the number of RG responses the items receive) and examinee-related factors (proficiency differences between rapid guessers and non-rapid guessers, the variability in RG behavior among rapid guessers, and the pattern of RG responses throughout the test). The findings highlight that ignoring RG not only distorts proficiency estimates but may also impact broader test operations, including adaptive testing, equating, and standard setting. By demonstrating the potential far-reaching effects of RG, we underline the need for testing professionals to implement methods that mitigate RG's impact (such as motivation filtering) to protect the integrity of their psychometric work.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"740-762"},"PeriodicalIF":1.6,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jedm.70010","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145754592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Special Issue: Adaptive Testing in Large-Scale Assessments
IF 1.6 | CAS Tier 4 (Psychology) | Q3 PSYCHOLOGY, APPLIED | Pub Date: 2025-09-25 | DOI: 10.1111/jedm.70009
Peter van Rijn, Francesco Avvisati
{"title":"Special Issue: Adaptive Testing in Large-Scale Assessments","authors":"Peter van Rijn,&nbsp;Francesco Avvisati","doi":"10.1111/jedm.70009","DOIUrl":"https://doi.org/10.1111/jedm.70009","url":null,"abstract":"","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 3","pages":"385-391"},"PeriodicalIF":1.6,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145341777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Precision and Bias of Cut Score Estimates from the Beuk Standard Setting Method
IF 1.6 | CAS Tier 4 (Psychology) | Q3 PSYCHOLOGY, APPLIED | Pub Date: 2025-09-09 | DOI: 10.1111/jedm.70007
Joseph H. Grochowalski, Lei Wan, Lauren Molin, Amy H. Hendrickson

The Beuk standard setting method derives cut scores through expert judgment that balances content and normative perspectives. This study developed a method to estimate confidence intervals for Beuk cut scores and assessed their accuracy via simulations. The simulations varied SME panel size, expert agreement, cut score locations, score distributions, and decision alignment. Panels with 20 or more participants provided precise and accurate cut score estimates when panelists were in strong agreement. Larger panels did not improve precision significantly. Cut score location influenced confidence interval widths, highlighting its importance in planning. Real data showed that SME disagreement increased the bias and variance of Beuk estimates. Beuk cut scores should be used cautiously with small panels, flat score distributions, or significant expert disagreement.

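For orientation, the sketch below implements the basic Beuk compromise (the cut score is taken where the line through the mean judged cut and mean judged pass rate, with slope -SD(pass rate)/SD(cut), meets the observed pass-rate curve) together with a judge-level bootstrap interval. The bootstrap is shown only as one plausible way to attach a confidence interval; it is not necessarily the interval estimator developed in the article, and the panel judgments and score data are hypothetical.

```python
# Illustrative sketch of the Beuk compromise plus a judge-level bootstrap CI.
import numpy as np

def beuk_cut(grid, curve, judged_cuts, judged_rates):
    """grid: candidate cut scores; curve: observed pass rate at each grid point."""
    x_bar, p_bar = np.mean(judged_cuts), np.mean(judged_rates)
    slope = -np.std(judged_rates, ddof=1) / np.std(judged_cuts, ddof=1)
    line = p_bar + slope * (grid - x_bar)
    return grid[np.argmin(np.abs(curve - line))]

def bootstrap_ci(grid, curve, judged_cuts, judged_rates, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    cuts = np.asarray(judged_cuts, float)
    rates = np.asarray(judged_rates, float)
    boot = []
    for _ in range(reps):
        idx = rng.integers(0, len(cuts), len(cuts))   # resample the SME panel
        boot.append(beuk_cut(grid, curve, cuts[idx], rates[idx]))
    return np.percentile(boot, [2.5, 97.5])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    scores = rng.binomial(60, 0.62, size=5000)              # hypothetical exam scores
    grid = np.arange(scores.min(), scores.max() + 1)
    curve = np.array([np.mean(scores >= c) for c in grid])  # pass rate per cut score
    judged_cuts = [34, 36, 38, 35, 40, 37, 33, 39, 36, 38]
    judged_rates = [0.70, 0.65, 0.60, 0.72, 0.55, 0.62, 0.75, 0.58, 0.66, 0.61]
    print("Beuk cut score:", beuk_cut(grid, curve, judged_cuts, judged_rates))
    print("95% bootstrap CI:", bootstrap_ci(grid, curve, judged_cuts, judged_rates))
```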
{"title":"The Precision and Bias of Cut Score Estimates from the Beuk Standard Setting Method","authors":"Joseph H. Grochowalski,&nbsp;Lei Wan,&nbsp;Lauren Molin,&nbsp;Amy H. Hendrickson","doi":"10.1111/jedm.70007","DOIUrl":"https://doi.org/10.1111/jedm.70007","url":null,"abstract":"<p>The Beuk standard setting method derives cut scores through expert judgment that balances content and normative perspectives. This study developed a method to estimate confidence intervals for Beuk settings and assessed their accuracy via simulations. Simulations varied SME panel size, expert agreement, cut score locations, score distributions, and decision alignment. Panels with 20+ participants provided precise and accurate cut score estimates if strongly agreed upon. Larger panels did not improve precision significantly. Cut score location influenced confidence interval widths, highlighting its importance in planning. Real data showed SME disagreement increased bias and variance of Beuk estimates. Use Beuk cut scores cautiously with small panels, flat score distributions, or significant expert disagreement.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"687-717"},"PeriodicalIF":1.6,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Simultaneous Detection of Cheaters and Compromised Items Using a Biclustering Approach
IF 1.6 | CAS Tier 4 (Psychology) | Q3 PSYCHOLOGY, APPLIED | Pub Date: 2025-09-08 | DOI: 10.1111/jedm.70004
Hyeryung Lee, Walter P. Vispoel

Traditional methods for detecting cheating on assessments tend to focus on either identifying cheaters or compromised items in isolation, overlooking their interconnection. In this study, we present a novel biclustering approach that simultaneously detects both cheaters and compromised items by identifying coherent subgroups of examinees and items exhibiting suspicious response patterns. To identify these patterns, our method leverages response accuracy, response time, and distractor choice data. We evaluated the approach on real datasets and compared its performance with existing detection approaches. Additionally, a comprehensive simulation study was conducted, modeling a variety of realistic cheating scenarios such as answer copying, pre-knowledge of test items, and distinct forms of rapid guessing. Our findings revealed that the biclustering method outperformed previous methods in simultaneously distinguishing cheating and non-cheating behaviors within the empirical study. The simulation analyses further revealed the conditions under which the biclustering approach was most effective in both regards. Overall, the findings underscore the flexibility of biclustering and its adaptability in enhancing test security within diverse testing environments.

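The article's biclustering operates on response accuracy, response time, and distractor choices; the sketch below shows only the shared idea, coclustering a synthetic examinee-by-item "suspicion score" matrix with scikit-learn's SpectralCoclustering so that a joint block of flagged examinees and flagged items falls out together. It is a generic illustration with made-up data, not the proposed method.

```python
# Generic coclustering illustration of joint examinee/item flagging.
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(3)
n_examinees, n_items = 300, 40
cheaters, compromised = np.arange(30), np.arange(10)

# Background suspicion is low and noisy; the cheater-by-compromised block is high.
suspicion = rng.uniform(0.01, 0.20, size=(n_examinees, n_items))
suspicion[np.ix_(cheaters, compromised)] += 0.70

model = SpectralCoclustering(n_clusters=2, random_state=0).fit(suspicion)

# The bicluster whose rows have the highest mean suspicion is the flagged block.
flagged = max(range(2), key=lambda k: suspicion[model.rows_[k]].mean())
flagged_examinees = np.where(model.rows_[flagged])[0]
flagged_items = np.where(model.columns_[flagged])[0]
print("flagged examinees:", flagged_examinees[:10], "...")
print("flagged items:", flagged_items)
```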
{"title":"Simultaneous Detection of Cheaters and Compromised Items Using a Biclustering Approach","authors":"Hyeryung Lee,&nbsp;Walter P. Vispoel","doi":"10.1111/jedm.70004","DOIUrl":"https://doi.org/10.1111/jedm.70004","url":null,"abstract":"<p>Traditional methods for detecting cheating on assessments tend to focus on either identifying cheaters or compromised items in isolation, overlooking their interconnection. In this study, we present a novel biclustering approach that simultaneously detects both cheaters and compromised items by identifying coherent subgroups of examinees and items exhibiting suspicious response patterns. To identify these patterns, our method leverages response accuracy, response time, and distractor choice data. We evaluated the approach on real datasets and compared its performance with existing detection approaches. Additionally, a comprehensive simulation study was conducted, modeling a variety of realistic cheating scenarios such as answer copying, pre-knowledge of test items, and distinct forms of rapid guessing. Our findings revealed that the biclustering method outperformed previous methods in simultaneously distinguishing cheating and non-cheating behaviors within the empirical study. The simulation analyses further revealed the conditions under which the biclustering approach was most effective in both regards. Overall, the findings underscore the flexibility of biclustering and its adaptability in enhancing test security within diverse testing environments.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"608-638"},"PeriodicalIF":1.6,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Classification Consistency and Accuracy Indices for Simple Structure MIRT Model
IF 1.6 | CAS Tier 4 (Psychology) | Q3 PSYCHOLOGY, APPLIED | Pub Date: 2025-09-04 | DOI: 10.1111/jedm.70006
Huan Liu, Won-Chan Lee

This study investigates the estimation of classification consistency and accuracy indices for composite summed and theta scores within the SS-MIRT framework, using five popular approaches, including the Lee, Rudner, Guo, Bayesian EAP, and Bayesian MCMC approaches. The procedures are illustrated through analysis of two real datasets and further evaluated via a simulation study under various conditions. Overall, results indicated that all five approaches performed well, producing classification indices estimates that were highly consistent in both magnitude and pattern. However, the results also indicated that factors such as the ability estimator, score metric, and cut score location can significantly influence estimation outcomes. Consequently, these considerations should guide practitioners in selecting the most appropriate estimation approach for their specific assessment context.

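As a reference point for one of the five approaches compared, the sketch below computes Rudner-style classification accuracy and consistency for a single theta score with known cut points and conditional standard errors. The article's SS-MIRT composite-score setting is more involved; this unidimensional version, with made-up estimates and a constant SE, is only meant to show the mechanics.

```python
# Hedged sketch of Rudner-style classification accuracy and consistency.
import numpy as np
from scipy.stats import norm

def rudner_indices(theta_hat, csem, cuts):
    """Return (classification accuracy, classification consistency).

    theta_hat: estimated thetas; csem: conditional SEs; cuts: category bounds.
    """
    bounds = np.concatenate(([-np.inf], cuts, [np.inf]))
    # P(true theta falls in category k | theta_hat, csem), one row per examinee.
    upper = norm.cdf((bounds[1:][None, :] - theta_hat[:, None]) / csem[:, None])
    lower = norm.cdf((bounds[:-1][None, :] - theta_hat[:, None]) / csem[:, None])
    probs = upper - lower
    observed = np.digitize(theta_hat, cuts)           # category of the estimate
    accuracy = probs[np.arange(len(theta_hat)), observed].mean()
    # Approximate probability that two independent scorings agree on a category.
    consistency = (probs ** 2).sum(axis=1).mean()
    return accuracy, consistency

rng = np.random.default_rng(4)
theta_hat = rng.normal(0, 1, 2000)
csem = np.full(2000, 0.30)                            # constant SE for simplicity
acc, con = rudner_indices(theta_hat, csem, cuts=[-0.5, 0.5])
print(f"accuracy = {acc:.3f}, consistency = {con:.3f}")
```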
{"title":"Classification Consistency and Accuracy Indices for Simple Structure MIRT Model","authors":"Huan Liu,&nbsp;Won-Chan Lee","doi":"10.1111/jedm.70006","DOIUrl":"https://doi.org/10.1111/jedm.70006","url":null,"abstract":"<p>This study investigates the estimation of classification consistency and accuracy indices for composite summed and theta scores within the SS-MIRT framework, using five popular approaches, including the Lee, Rudner, Guo, Bayesian EAP, and Bayesian MCMC approaches. The procedures are illustrated through analysis of two real datasets and further evaluated via a simulation study under various conditions. Overall, results indicated that all five approaches performed well, producing classification indices estimates that were highly consistent in both magnitude and pattern. However, the results also indicated that factors such as the ability estimator, score metric, and cut score location can significantly influence estimation outcomes. Consequently, these considerations should guide practitioners in selecting the most appropriate estimation approach for their specific assessment context.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"663-686"},"PeriodicalIF":1.6,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jedm.70006","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multiple Sets of Initial Values Method for MLE-EM and Its Variants in Cognitive Diagnosis Models
IF 1.6 | CAS Tier 4 (Psychology) | Q3 PSYCHOLOGY, APPLIED | Pub Date: 2025-09-01 | DOI: 10.1111/jedm.70005
Yue Zhao, Yuerong Wu, Yanlou Liu, Tao Xin, Yiming Wang

Cognitive diagnosis models (CDMs) are widely used to assess individuals’ latent characteristics, offering detailed diagnostic insights for tailored instructional development. Maximum likelihood estimation using the expectation-maximization algorithm (MLE-EM) or its variants, such as the EM algorithm with monotonic constraints and Bayes modal estimation, typically uses a single set of initial values (SIV). The MLE-EM method is sensitive to initial values, especially when dealing with non-convex likelihood functions. This sensitivity implies that different initial values may converge to different local maximum likelihood solutions, but SIV does not guarantee a satisfactory local optimum. Thus, we introduced the multiple sets of initial values (MIV) method to reduce sensitivity to the choice of initial values. We compared MIV and SIV in terms of convergence, log-likelihood values of the converged solutions, parameter recovery, and time consumption under varying conditions of item quality, sample size, attribute correlation, number of initial sets, and convergence settings. The results showed that MIV outperformed SIV in terms of convergence. Applying the MIV method increased the probability of obtaining solutions with higher log-likelihood values. We provide a detailed discussion of this outcome under small sample conditions in which MIV performed worse than SIV.

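The MIV principle itself is simple to demonstrate even though the article applies it to CDM estimation: run EM from several random starting points and keep the solution with the highest log-likelihood. The sketch below uses a Gaussian mixture purely as a runnable stand-in for the CDM likelihood; it is not a cognitive diagnosis model, and the "masters/non-masters" framing is only an analogy.

```python
# The MIV idea in miniature: multiple EM starts, keep the best log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Two latent classes stand in for a simple mastery/non-mastery structure.
data = np.concatenate([rng.normal(-2, 1, (300, 1)), rng.normal(2, 1, (700, 1))])

def best_of_n_starts(n_starts, seed=0):
    """Fit EM n_starts times from random initial values and keep the best fit."""
    best = None
    for s in range(n_starts):
        gm = GaussianMixture(n_components=2, init_params="random",
                             random_state=seed + s, max_iter=200).fit(data)
        if best is None or gm.score(data) > best.score(data):
            best = gm
    return best

siv = best_of_n_starts(1)     # single set of initial values
miv = best_of_n_starts(20)    # multiple sets of initial values
print("SIV mean log-likelihood:", round(siv.score(data), 4))
print("MIV mean log-likelihood:", round(miv.score(data), 4))
```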
{"title":"Multiple Sets of Initial Values Method for MLE-EM and Its Variants in Cognitive Diagnosis Models","authors":"Yue Zhao,&nbsp;Yuerong Wu,&nbsp;Yanlou Liu,&nbsp;Tao Xin,&nbsp;Yiming Wang","doi":"10.1111/jedm.70005","DOIUrl":"https://doi.org/10.1111/jedm.70005","url":null,"abstract":"<p>Cognitive diagnosis models (CDMs) are widely used to assess individuals’ latent characteristics, offering detailed diagnostic insights for tailored instructional development. Maximum likelihood estimation using the expectation-maximization algorithm (MLE-EM) or its variants, such as the EM algorithm with monotonic constraints and Bayes modal estimation, typically uses a single set of initial values (SIV). The MLE-EM method is sensitive to initial values, especially when dealing with non-convex likelihood functions. This sensitivity implies that different initial values may converge to different local maximum likelihood solutions, but SIV does not guarantee a satisfactory local optimum. Thus, we introduced the multiple sets of initial values (MIV) method to reduce sensitivity to the choice of initial values. We compared MIV and SIV in terms of convergence, log-likelihood values of the converged solutions, parameter recovery, and time consumption under varying conditions of item quality, sample size, attribute correlation, number of initial sets, and convergence settings. The results showed that MIV outperformed SIV in terms of convergence. Applying the MIV method increased the probability of obtaining solutions with higher log-likelihood values. We provide a detailed discussion of this outcome under small sample conditions in which MIV performed worse than SIV.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"639-662"},"PeriodicalIF":1.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparing Data-Driven Methods for Removing Options in Assessment Items
IF 1.6 | CAS Tier 4 (Psychology) | Q3 PSYCHOLOGY, APPLIED | Pub Date: 2025-09-01 | DOI: 10.1111/jedm.70003
William Muntean, Joe Betts, Zhuoran Wang, Hao Jia

Test items with problematic options often require revision to improve their psychometric properties. When an option is identified as ambiguous or nonfunctioning, the traditional approach involves removing the option and conducting another field test to gather new response data—a process that, while effective, is resource-intensive. This study compares two methods for handling option removal: the Retesting method (administering modified items to new examinees) versus the Recalculating method (computationally removing options from existing response data). Through a controlled experiment with multiple-response and matrix-format items, we examined whether these methods produce equivalent item characteristics. Results show striking similarities between methods across multiple psychometric item properties. These findings suggest that the Recalculating method may offer an efficient alternative for items with sufficient option choices. We discuss implementation considerations and present our experimental design and analytical approach as a framework that other testing programs can adapt to evaluate whether the Recalculating method is appropriate for their specific contexts.

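A toy version of the Recalculating idea for a multiple-response item: drop the flagged option from both the scoring key and every stored response, then rescore the existing data instead of re-fielding a revised item. The exact-match scoring rule here is illustrative; operational partial-credit rules and the matrix-format items studied in the article may differ.

```python
# Toy sketch of the "Recalculating" method for one multiple-response item.
def rescore_without_option(responses, key, removed_option):
    """responses: list of sets of selected options; key: set of correct options."""
    new_key = key - {removed_option}
    rescored = []
    for selected in responses:
        # Score as if the removed option had never been presented.
        rescored.append(1 if (selected - {removed_option}) == new_key else 0)
    return rescored

responses = [{"A", "C"}, {"A", "C", "E"}, {"A"}, {"B", "C"}]
key = {"A", "C"}
# Option "E" was judged ambiguous; recalculate scores from the existing data.
print(rescore_without_option(responses, key, removed_option="E"))
# -> [1, 1, 0, 0]: the second examinee now earns credit because only "E"
#    separated their response from the key.
```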
{"title":"Comparing Data-Driven Methods for Removing Options in Assessment Items","authors":"William Muntean,&nbsp;Joe Betts,&nbsp;Zhuoran Wang,&nbsp;Hao Jia","doi":"10.1111/jedm.70003","DOIUrl":"https://doi.org/10.1111/jedm.70003","url":null,"abstract":"<p>Test items with problematic options often require revision to improve their psychometric properties. When an option is identified as ambiguous or nonfunctioning, the traditional approach involves removing the option and conducting another field test to gather new response data—a process that, while effective, is resource-intensive. This study compares two methods for handling option removal: the Retesting method (administering modified items to new examinees) versus the Recalculating method (computationally removing options from existing response data). Through a controlled experiment with multiple-response and matrix-format items, we examined whether these methods produce equivalent item characteristics. Results show striking similarities between methods across multiple psychometric item properties. These findings suggest that the Recalculating method may offer an efficient alternative for items with sufficient option choices. We discuss implementation considerations and present our experimental design and analytical approach as a framework that other testing programs can adapt to evaluate whether the Recalculating method is appropriate for their specific contexts.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"588-607"},"PeriodicalIF":1.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jedm.70003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0