Does Timed Testing Affect the Interpretation of Efficiency Scores?—A GLMM Analysis of Reading Components

Frank Goldhammer, Ulf Kroehne, Carolin Hahnel, Johannes Naumann, Paul De Boeck

Journal of Educational Measurement, 61(3), 349–377. Published 2024-05-12. DOI: 10.1111/jedm.12393. Available at https://onlinelibrary.wiley.com/doi/10.1111/jedm.12393
Citations: 0
Abstract
The efficiency of cognitive component skills is typically assessed with speeded performance tests. Interpreting only effective ability or effective speed as efficiency may be challenging because of the within-person dependency between the two variables (speed-ability tradeoff, SAT). The present study measures efficiency as effective ability conditional on speed by controlling speed experimentally. Item-level time limits control the stimulus presentation time and the time window for responding (timed condition). The overall goal was to examine the construct validity of effective ability scores obtained from the untimed and timed conditions by comparing the effects of theory-based item properties on item difficulty. If such effects exist, the scores reflect how well the test-takers were able to cope with the theory-based requirements. A German subsample from PISA 2012 completed two reading component skills tasks (i.e., word recognition and semantic integration) with and without item-level time limits. Overall, the included linguistic item properties showed stronger effects on item difficulty in the timed than in the untimed condition. In the semantic integration task, item properties explained the time required in the untimed condition. The results suggest that effective ability scores in the timed condition better reflect how well test-takers were able to cope with the theoretically relevant task demands.
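In the spirit of the GLMM analysis named in the title, the design can be thought of as an explanatory item response (LLTM-type) model: item properties enter as fixed effects on item difficulty, persons contribute random intercepts, and the timed/untimed condition moderates the property effects. The sketch below illustrates this kind of model on simulated data using Python's statsmodels; the covariate names (word_length, log_frequency), the simulated effect sizes, and the variational-Bayes estimator are illustrative assumptions, not the authors' actual specification.

```python
# Minimal sketch of an LLTM-type explanatory IRT model fit as a GLMM.
# Assumed, illustrative setup: two invented item covariates, a person
# random intercept, and a condition-by-property interaction.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(42)
n_persons, n_items = 100, 20

# Simulated item properties (hypothetical covariates, not the study's actual ones)
word_length = rng.integers(3, 12, size=n_items)
log_frequency = rng.normal(0, 1, size=n_items)

# Person abilities; each person responds in both conditions (within-person design)
theta = rng.normal(0, 1, size=n_persons)

rows = []
for cond, cond_code in (("untimed", 0), ("timed", 1)):
    for p in range(n_persons):
        for i in range(n_items):
            # Generating model: difficulty is a linear function of the item
            # properties; property effects are stronger in the timed condition.
            beta_len = 0.10 + 0.10 * cond_code
            beta_freq = -0.30 - 0.20 * cond_code
            difficulty = -1.0 + beta_len * word_length[i] + beta_freq * log_frequency[i]
            eta = theta[p] - difficulty
            y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
            rows.append((p, i, cond, word_length[i], log_frequency[i], y))

data = pd.DataFrame(
    rows,
    columns=["person", "item", "condition", "word_length", "log_frequency", "correct"],
)

# GLMM: fixed effects for the item properties and their interaction with the
# timed/untimed condition; a variance component (random intercept) for persons.
model = BinomialBayesMixedGLM.from_formula(
    "correct ~ (word_length + log_frequency) * C(condition)",
    {"person": "0 + C(person)"},
    data,
)
result = model.fit_vb()  # variational Bayes; fit_map() is an alternative
print(result.summary())
```

Under this formulation, a larger (in absolute value) property-by-condition interaction coefficient corresponds to the abstract's finding that linguistic item properties have stronger effects on difficulty under item-level time limits.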
About the Journal
The Journal of Educational Measurement (JEM) publishes original measurement research, provides reviews of measurement publications, and reports on innovative measurement applications. The topics addressed will interest those concerned with the practice of measurement in field settings as well as measurement theorists. In addition to presenting new contributions to measurement theory and practice, JEM also serves as a vehicle for improving educational measurement applications in a variety of settings.