Technology-Enhanced Items and Model–Data Misfit
Carol Eckerly, Yue Jia, Paul Jewsbury
ETS Research Report Series, 2022(1), 1–16. Published 2022-07-14. DOI: 10.1002/ets2.12353
https://onlinelibrary.wiley.com/doi/10.1002/ets2.12353
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet referred to as a branching item. Under the branching format, all test takers receive a common routing question, and the next question is assigned deterministically based on the response to that routing question. The items at both stages are then scored together as one polytomous item. Real and simulated examples illustrate the challenges of applying IRT models to branching items. We find that model–data misfit is likely to occur when branching items are scored as polytomous items and modeled with the generalized partial credit model, and that the relationship between the discrimination of the routing component and the discriminations of the subsequent components appears to drive the misfit. We conclude with lessons learned and offer suggested guidelines and considerations for operationalizing the use of branching items in future assessments.
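The two ideas the abstract combines — deterministic two-stage routing scored as a single polytomous item, and category probabilities under the generalized partial credit model (GPCM) — can be sketched as follows. This is a minimal illustration only: the routing rule and the 0–2 scoring rubric are hypothetical examples of the branching format, not the actual items or parameters studied in the report.

```python
import math

def gpcm_probs(theta: float, a: float, b: list[float]) -> list[float]:
    """Category probabilities for one polytomous item under the GPCM:
    P(X = k | theta) is proportional to exp(sum_{v<=k} a*(theta - b_v)),
    where the empty sum for category k = 0 is taken to be 0.

    theta : latent ability
    a     : item discrimination
    b     : step difficulties b_1..b_m (yielding m + 1 score categories)
    """
    z = [0.0]  # cumulative logits; category 0 contributes the empty sum
    for step in b:
        z.append(z[-1] + a * (theta - step))
    denom = sum(math.exp(zk) for zk in z)
    return [math.exp(zk) / denom for zk in z]

def branching_item_score(routing_correct: bool, stage2_correct: bool) -> int:
    """Hypothetical rubric for a two-stage branching item: the routing
    response deterministically selects the second question (a harder one
    after a hit, an easier one after a miss), and the pair is scored as
    a single polytomous item with categories 0, 1, 2."""
    if routing_correct:
        return 2 if stage2_correct else 1  # hit, then harder follow-up
    return 1 if stage2_correct else 0      # miss, then easier follow-up
```

For example, `gpcm_probs(0.0, 1.0, [-0.5, 0.5])` returns three probabilities summing to 1, one per score category of a 0–2 item; the composite score from `branching_item_score` would be the observed category fed into such a model.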