Inadequate test-taking effort poses a significant challenge, particularly when low-stakes test results inform high-stakes policy and psychometric decisions. We examined how rapid guessing (RG), a common form of low test-taking effort, biases item parameter estimates, specifically the discrimination and difficulty parameters. Previous research has reported conflicting findings on the direction of this bias and the factors that contribute to it. Using simulated data that replicate real-world, low-stakes testing conditions, this study reconciles these inconsistencies by identifying the conditions under which item parameters are over- or underestimated. Bias is influenced by item-related factors (the true parameter values and the number of RG responses an item receives) and examinee-related factors (proficiency differences between rapid guessers and non-rapid guessers, the variability in RG behavior among rapid guessers, and the pattern of RG responses throughout the test). The findings highlight that ignoring RG not only distorts proficiency estimates but may also affect broader test operations, including adaptive testing, equating, and standard setting. By demonstrating these potentially far-reaching effects of RG, we underline the need for testing professionals to implement methods that mitigate its impact (such as motivation filtering) to protect the integrity of their psychometric work.
{"title":"From Item Estimates to Test Operations: The Cascading Effect of Rapid Guessing","authors":"Sarah Alahmadi, Christine E. DeMars","doi":"10.1111/jedm.70010","DOIUrl":"https://doi.org/10.1111/jedm.70010","url":null,"abstract":"<p>Inadequate test-taking effort poses a significant challenge, particularly when low-stakes test results inform high-stakes policy and psychometric decisions. We examined how rapid guessing (RG), a common form of low test-taking effort, biases item parameter estimates, particularly the discrimination and difficulty parameters. Previous research reported conflicting findings on the direction of bias and what contributes to it. Using simulated data that replicate real-world, low-stakes testing conditions, this study reconciles the inconsistencies by identifying the conditions under which item parameters are over- or underestimated. Bias is influenced by item-related factors (true parameter values and the number of RG responses the items receive) and examinee-related factors (proficiency differences between rapid guessers and non-rapid guessers, the variability in RG behavior among rapid guessers, and the pattern of RG responses throughout the test). The findings highlight that ignoring RG not only distorts proficiency estimates but may also impact broader test operations, including adaptive testing, equating, and standard setting. By demonstrating the potential far-reaching effects of RG, we underline the need for testing professionals to implement methods that mitigate RG's impact (such as motivation filtering) to protect the integrity of their psychometric work.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"740-762"},"PeriodicalIF":1.6,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jedm.70010","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145754592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Special Issue: Adaptive Testing in Large-Scale Assessments","authors":"Peter van Rijn, Francesco Avvisati","doi":"10.1111/jedm.70009","DOIUrl":"https://doi.org/10.1111/jedm.70009","url":null,"abstract":"","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 3","pages":"385-391"},"PeriodicalIF":1.6,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145341777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joseph H. Grochowalski, Lei Wan, Lauren Molin, Amy H. Hendrickson
The Beuk standard setting method derives cut scores through expert judgment that balances content-based and normative perspectives. This study developed a method to estimate confidence intervals for Beuk cut scores and assessed their accuracy via simulation. The simulations varied subject matter expert (SME) panel size, the degree of expert agreement, cut score location, score distribution, and decision alignment. Panels of 20 or more participants provided precise and accurate cut score estimates when experts were in strong agreement, and further increases in panel size did not substantially improve precision. Cut score location influenced confidence interval widths, highlighting its importance in planning. Real data showed that SME disagreement increased both the bias and the variance of Beuk estimates. Beuk cut scores should therefore be used cautiously when panels are small, score distributions are flat, or experts disagree substantially.
{"title":"The Precision and Bias of Cut Score Estimates from the Beuk Standard Setting Method","authors":"Joseph H. Grochowalski, Lei Wan, Lauren Molin, Amy H. Hendrickson","doi":"10.1111/jedm.70007","DOIUrl":"https://doi.org/10.1111/jedm.70007","url":null,"abstract":"<p>The Beuk standard setting method derives cut scores through expert judgment that balances content and normative perspectives. This study developed a method to estimate confidence intervals for Beuk settings and assessed their accuracy via simulations. Simulations varied SME panel size, expert agreement, cut score locations, score distributions, and decision alignment. Panels with 20+ participants provided precise and accurate cut score estimates if strongly agreed upon. Larger panels did not improve precision significantly. Cut score location influenced confidence interval widths, highlighting its importance in planning. Real data showed SME disagreement increased bias and variance of Beuk estimates. Use Beuk cut scores cautiously with small panels, flat score distributions, or significant expert disagreement.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"687-717"},"PeriodicalIF":1.6,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Traditional methods for detecting cheating on assessments tend to focus on identifying either cheaters or compromised items in isolation, overlooking their interconnection. In this study, we present a novel biclustering approach that detects both cheaters and compromised items simultaneously by identifying coherent subgroups of examinees and items exhibiting suspicious response patterns. To identify these patterns, our method leverages response accuracy, response time, and distractor choice data. We evaluated the approach on real datasets and compared its performance with existing detection approaches. Additionally, a comprehensive simulation study was conducted, modeling a variety of realistic cheating scenarios such as answer copying, pre-knowledge of test items, and distinct forms of rapid guessing. In the empirical study, the biclustering method outperformed previous methods in simultaneously distinguishing cheating and non-cheating behaviors, and the simulation analyses further revealed the conditions under which the approach was most effective in both regards. Overall, the findings underscore the flexibility of biclustering and its adaptability in enhancing test security across diverse testing environments.
{"title":"Simultaneous Detection of Cheaters and Compromised Items Using a Biclustering Approach","authors":"Hyeryung Lee, Walter P. Vispoel","doi":"10.1111/jedm.70004","DOIUrl":"https://doi.org/10.1111/jedm.70004","url":null,"abstract":"<p>Traditional methods for detecting cheating on assessments tend to focus on either identifying cheaters or compromised items in isolation, overlooking their interconnection. In this study, we present a novel biclustering approach that simultaneously detects both cheaters and compromised items by identifying coherent subgroups of examinees and items exhibiting suspicious response patterns. To identify these patterns, our method leverages response accuracy, response time, and distractor choice data. We evaluated the approach on real datasets and compared its performance with existing detection approaches. Additionally, a comprehensive simulation study was conducted, modeling a variety of realistic cheating scenarios such as answer copying, pre-knowledge of test items, and distinct forms of rapid guessing. Our findings revealed that the biclustering method outperformed previous methods in simultaneously distinguishing cheating and non-cheating behaviors within the empirical study. The simulation analyses further revealed the conditions under which the biclustering approach was most effective in both regards. Overall, the findings underscore the flexibility of biclustering and its adaptability in enhancing test security within diverse testing environments.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"608-638"},"PeriodicalIF":1.6,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study investigates the estimation of classification consistency and accuracy indices for composite summed scores and theta scores within the simple-structure multidimensional item response theory (SS-MIRT) framework, using five popular approaches: the Lee, Rudner, Guo, Bayesian EAP, and Bayesian MCMC approaches. The procedures are illustrated through analyses of two real datasets and further evaluated via a simulation study under various conditions. Overall, all five approaches performed well, producing classification index estimates that were highly consistent in both magnitude and pattern. However, the results also indicated that factors such as the ability estimator, the score metric, and the cut score location can substantially influence estimation outcomes. These considerations should therefore guide practitioners in selecting the most appropriate estimation approach for their specific assessment context.
{"title":"Classification Consistency and Accuracy Indices for Simple Structure MIRT Model","authors":"Huan Liu, Won-Chan Lee","doi":"10.1111/jedm.70006","DOIUrl":"https://doi.org/10.1111/jedm.70006","url":null,"abstract":"<p>This study investigates the estimation of classification consistency and accuracy indices for composite summed and theta scores within the SS-MIRT framework, using five popular approaches, including the Lee, Rudner, Guo, Bayesian EAP, and Bayesian MCMC approaches. The procedures are illustrated through analysis of two real datasets and further evaluated via a simulation study under various conditions. Overall, results indicated that all five approaches performed well, producing classification indices estimates that were highly consistent in both magnitude and pattern. However, the results also indicated that factors such as the ability estimator, score metric, and cut score location can significantly influence estimation outcomes. Consequently, these considerations should guide practitioners in selecting the most appropriate estimation approach for their specific assessment context.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"663-686"},"PeriodicalIF":1.6,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jedm.70006","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yue Zhao, Yuerong Wu, Yanlou Liu, Tao Xin, Yiming Wang
Cognitive diagnosis models (CDMs) are widely used to assess individuals' latent characteristics, offering detailed diagnostic insights for tailored instruction. Maximum likelihood estimation via the expectation-maximization algorithm (MLE-EM), or its variants such as the EM algorithm with monotonic constraints and Bayes modal estimation, typically uses a single set of initial values (SIV). MLE-EM is sensitive to initial values, especially when the likelihood function is non-convex: different initial values may converge to different local maxima, and SIV does not guarantee a satisfactory local optimum. We therefore introduce the multiple sets of initial values (MIV) method to reduce this sensitivity. We compared MIV and SIV in terms of convergence, the log-likelihood of the converged solutions, parameter recovery, and computation time under varying conditions of item quality, sample size, attribute correlation, number of initial sets, and convergence settings. MIV outperformed SIV in terms of convergence, and applying MIV increased the probability of obtaining solutions with higher log-likelihood values. We also provide a detailed discussion of the small-sample conditions under which MIV performed worse than SIV.
{"title":"Multiple Sets of Initial Values Method for MLE-EM and Its Variants in Cognitive Diagnosis Models","authors":"Yue Zhao, Yuerong Wu, Yanlou Liu, Tao Xin, Yiming Wang","doi":"10.1111/jedm.70005","DOIUrl":"https://doi.org/10.1111/jedm.70005","url":null,"abstract":"<p>Cognitive diagnosis models (CDMs) are widely used to assess individuals’ latent characteristics, offering detailed diagnostic insights for tailored instructional development. Maximum likelihood estimation using the expectation-maximization algorithm (MLE-EM) or its variants, such as the EM algorithm with monotonic constraints and Bayes modal estimation, typically uses a single set of initial values (SIV). The MLE-EM method is sensitive to initial values, especially when dealing with non-convex likelihood functions. This sensitivity implies that different initial values may converge to different local maximum likelihood solutions, but SIV does not guarantee a satisfactory local optimum. Thus, we introduced the multiple sets of initial values (MIV) method to reduce sensitivity to the choice of initial values. We compared MIV and SIV in terms of convergence, log-likelihood values of the converged solutions, parameter recovery, and time consumption under varying conditions of item quality, sample size, attribute correlation, number of initial sets, and convergence settings. The results showed that MIV outperformed SIV in terms of convergence. Applying the MIV method increased the probability of obtaining solutions with higher log-likelihood values. We provide a detailed discussion of this outcome under small sample conditions in which MIV performed worse than SIV.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"639-662"},"PeriodicalIF":1.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Test items with problematic options often require revision to improve their psychometric properties. When an option is identified as ambiguous or nonfunctioning, the traditional approach is to remove the option and conduct another field test to gather new response data, a process that is effective but resource-intensive. This study compares two methods for handling option removal: the Retesting method (administering the modified items to new examinees) and the Recalculating method (computationally removing the options from existing response data). Through a controlled experiment with multiple-response and matrix-format items, we examined whether these methods produce equivalent item characteristics. Results show striking similarities between the methods across multiple psychometric item properties, suggesting that the Recalculating method may offer an efficient alternative for items that retain a sufficient number of options. We discuss implementation considerations and present our experimental design and analytical approach as a framework that other testing programs can adapt to evaluate whether the Recalculating method is appropriate for their specific contexts.
{"title":"Comparing Data-Driven Methods for Removing Options in Assessment Items","authors":"William Muntean, Joe Betts, Zhuoran Wang, Hao Jia","doi":"10.1111/jedm.70003","DOIUrl":"https://doi.org/10.1111/jedm.70003","url":null,"abstract":"<p>Test items with problematic options often require revision to improve their psychometric properties. When an option is identified as ambiguous or nonfunctioning, the traditional approach involves removing the option and conducting another field test to gather new response data—a process that, while effective, is resource-intensive. This study compares two methods for handling option removal: the Retesting method (administering modified items to new examinees) versus the Recalculating method (computationally removing options from existing response data). Through a controlled experiment with multiple-response and matrix-format items, we examined whether these methods produce equivalent item characteristics. Results show striking similarities between methods across multiple psychometric item properties. These findings suggest that the Recalculating method may offer an efficient alternative for items with sufficient option choices. We discuss implementation considerations and present our experimental design and analytical approach as a framework that other testing programs can adapt to evaluate whether the Recalculating method is appropriate for their specific contexts.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"588-607"},"PeriodicalIF":1.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jedm.70003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mingfeng Xue, Yunting Liu, Xingyao Xiao, Mark Wilson
Prompts play a crucial role in eliciting accurate outputs from large language models (LLMs). This study examines the effectiveness of an automatic prompt engineering (APE) framework for automatic scoring in educational measurement. We collected constructed-response data from 930 students across 11 items and used human scores as the true labels. A baseline was established by providing LLMs with the original human-scoring instructions and materials; APE was then applied to optimize the prompts for each item. On average, APE increased scoring accuracy by 9%; few-shot learning (providing multiple labeled examples related to the goal) increased APE performance by 2%; and a high temperature (a parameter controlling output randomness) was needed in at least part of the APE process to improve scoring accuracy. Quadratic weighted kappa (QWK) showed a similar pattern. These findings support the use of APE in automatic scoring. However, compared with the manual scoring instructions, APE tended to restate and reformat the scoring prompts, which could raise concerns about validity. The creative variability introduced by LLMs thus calls for balancing innovation against adherence to scoring rubrics.
{"title":"Automatic Prompt Engineering for Automatic Scoring","authors":"Mingfeng Xue, Yunting Liu, Xingyao Xiao, Mark Wilson","doi":"10.1111/jedm.70002","DOIUrl":"https://doi.org/10.1111/jedm.70002","url":null,"abstract":"<p>Prompts play a crucial role in eliciting accurate outputs from large language models (LLMs). This study examines the effectiveness of an automatic prompt engineering (APE) framework for automatic scoring in educational measurement. We collected constructed-response data from 930 students across 11 items and used human scores as the true labels. A baseline was established by providing LLMs with the original human-scoring instructions and materials. APE was then applied to optimize prompts for each item. We found that on average, APE increased scoring accuracy by 9%; few-shot learning (i.e., giving multiple labeled examples related to the goal) increased APE performance by 2%; a high temperature (i.e., a parameter for output randomness) was needed in at least part of the APE to improve the scoring accuracy; Quadratic Weighted Kappa (QWK) showed a similar pattern. These findings support the use of APE in automatic scoring. Moreover, compared with the manual scoring instructions, APE tended to restate and reformat the scoring prompts, which could give rise to concerns about validity. Thus, the creative variability introduced by LLMs raises considerations about the balance between innovation and adherence to scoring rubrics.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"559-587"},"PeriodicalIF":1.6,"publicationDate":"2025-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Paul A. Jewsbury, Daniel F. McCaffrey, Yue Jia, Eugenio J. Gonzalez
Large-scale survey assessments (LSAs) such as NAEP, TIMSS, PIRLS, IELS, and NAPLAN produce plausible values of student proficiency for estimating population statistics. Plausible values are imputed values for latent proficiency variables. While most prominent in LSAs, they are applicable to a wide range of latent variable modeling contexts, such as surveys about psychological dispositions or beliefs. Following the practice of multiple imputation, LSAs produce multiple sets of plausible values for each survey, but the criteria used to determine the number of plausible values remain unresolved and are applied inconsistently in practice. We show analytically and via simulation that the number of plausible values determines the amount of Monte Carlo error in point estimates and standard errors as a function of the fraction of missing information. We derive expressions for the number of plausible values required to reach a given level of precision, and we analyze real data from an LSA to provide guidelines, supported by theory, simulation, and real data, on how many plausible values to use. Finally, we illustrate the impact with a power analysis. Our results show that there is meaningful benefit to using more plausible values than LSAs currently generate.
{"title":"How Many Plausible Values?","authors":"Paul A. Jewsbury, Daniel F. McCaffrey, Yue Jia, Eugenio J. Gonzalez","doi":"10.1111/jedm.70000","DOIUrl":"https://doi.org/10.1111/jedm.70000","url":null,"abstract":"<p>Large-scale survey assessments (LSAs) such as NAEP, TIMSS, PIRLS, IELS, and NAPLAN produce plausible values of student proficiency for estimating population statistics. Plausible values are imputed values for latent proficiency variables. While prominently used for LSAs, they are applicable to a wide range of latent variable modelling contexts such as surveys about psychological dispositions or beliefs. Following the practice of multiple imputation, LSAs produce multiple sets of plausible values for each survey. The criteria used to determine the number of plausible values remains unresolved and is inconsistent in practice. We show analytically and via simulation that the number of plausible values used determines the amount of Monte Carlo error on point estimates and standard errors as a function of the fraction of missing information. We derive expressions to determine the number of plausible values required to reach a given level of precision. We analyze real data from a LSA to provide guidelines supported by theory, simulation, and real data on the number of plausible values. Finally, we illustrate the impact with a power analysis. Our results show there is meaningful benefit to the use of greater numbers of plausible values than currently generated by LSAs.</p>","PeriodicalId":47871,"journal":{"name":"Journal of Educational Measurement","volume":"62 4","pages":"531-558"},"PeriodicalIF":1.6,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While testlets have proven useful for assessing complex skills, the stem shared by multiple items often induces correlations between responses, leading to violations of local independence (LI), which can result in biased parameter and ability estimates. Diagnostic procedures for detecting testlet effects typically involve model comparisons testing for the inclusion of extra testlet parameters or, at the item level, testing for pairwise LI. Rosenbaum's adaptation of the Mantel-Haenszel (MH)