Model meets reality: Validating a new behavioral measure for test-taking effort
Esther Ulitzsch, Christiane Penk, Matthias von Davier, S. Pohl
Educational Assessment, 26(1), 104–124. Published online 2021-01-12.
DOI: 10.1080/10627197.2020.1858786 (https://doi.org/10.1080/10627197.2020.1858786)
Citations: 17
Abstract
Identifying and considering test-taking effort is of utmost importance for drawing valid inferences on examinee competency in low-stakes tests. Different approaches exist for doing so. The speed-accuracy+engagement model aims at identifying non-effortful test-taking behavior in terms of nonresponse and rapid guessing based on responses and response times. The model allows for identifying rapid-guessing behavior on the item-by-examinee level whilst jointly modeling the processes underlying rapid guessing and effortful responding. To assess whether the model indeed provides a valid measure of test-taking effort, we investigate (1) convergent validity with previously developed behavioral as well as self-report measures on guessing behavior and effort, (2) fit within the nomological network of test-taking motivation derived from expectancy-value theory, and (3) ability to detect differences between groups that can be assumed to differ in test-taking effort. Results suggest that the model captures central aspects of non-effortful test-taking behavior. While it does not cover the whole spectrum of non-effortful test-taking behavior, it provides a measure for some aspects of it, in a manner that is less subjective than self-reports. The article concludes with a discussion of implications for the development of behavioral measures of non-effortful test-taking behavior.
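The speed-accuracy+engagement model described in the abstract is a joint model of responses and response times; the minimal sketch below illustrates only the simpler, threshold-based idea of flagging rapid guessing from response times, which is commonly used as a behavioral effort index and serves here as a convergent-validity-style comparison point. All function names, thresholds, and data are hypothetical and do not reproduce the authors' model.

```python
import numpy as np

def flag_rapid_guesses(response_times, thresholds):
    """Flag item-by-examinee responses as rapid guesses when the response
    time falls below an item-specific threshold (assumed, illustrative rule).

    response_times : (n_examinees, n_items) array of response times in seconds
    thresholds     : (n_items,) array of item-level time thresholds in seconds
    """
    return response_times < thresholds  # boolean matrix of rapid-guess flags

def proportion_effortful(response_times, thresholds):
    """Per-examinee proportion of items answered at or above the time
    threshold -- a simple behavioral effort index, not the paper's
    speed-accuracy+engagement model."""
    rapid = flag_rapid_guesses(response_times, thresholds)
    return 1.0 - rapid.mean(axis=1)

# Hypothetical example: 3 examinees, 4 items
rt = np.array([[12.3, 2.1, 15.0, 1.8],
               [10.5, 9.7, 11.2, 8.4],
               [ 1.2, 1.5,  2.0, 1.1]])
th = np.array([3.0, 3.0, 4.0, 3.0])  # illustrative item-level thresholds

print(flag_rapid_guesses(rt, th))   # item-by-examinee rapid-guess flags
print(proportion_effortful(rt, th)) # effort index per examinee
```

Unlike this fixed-threshold rule, the model validated in the article infers rapid guessing jointly with effortful responding, so the sketch should be read only as an orientation to what an item-by-examinee behavioral effort measure looks like.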
About the journal:
Educational Assessment publishes original research and scholarship on the assessment of individuals, groups, and programs in educational settings. It includes theory, methodological approaches and empirical research in the appraisal of the learning and achievement of students and teachers, young children and adults, and novices and experts. The journal reports on current large-scale testing practices, discusses alternative approaches, presents scholarship on classroom assessment practices and includes assessment topics debated at the national level. It welcomes both conceptual and empirical pieces and encourages articles that provide a strong bridge between theory and/or empirical research and the implications for educational policy and/or practice.