{"title":"利用学习进展评估中学生源评价技能","authors":"Jesse R. Sparks, P. V. van Rijn, P. Deane","doi":"10.1080/10627197.2021.1966299","DOIUrl":null,"url":null,"abstract":"ABSTRACT Effectively evaluating the credibility and accuracy of multiple sources is critical for college readiness. We developed 24 source evaluation tasks spanning four predicted difficulty levels of a hypothesized learning progression (LP) and piloted these tasks to evaluate the utility of an LP-based approach to designing formative literacy assessments. Sixth, seventh, and eighth grade students (N = 360, 120 per grade) completed 12 of the 24 tasks in an online testing session. Analyses examined the tasks’ reliability and validity and whether patterns of performance aligned to predicted LP levels (i.e., recovery of the LP) using task progression maps derived from item response theory (IRT). Results suggested that the LP tasks were reliable and correlated with external measures; however, some lower level tasks proved unexpectedly difficult. Possible explanations for low performance are discussed, followed by implications for future LP and task revisions. This work provides a model for designing and evaluating LP-based literacy assessments.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"26 1","pages":"213 - 240"},"PeriodicalIF":2.1000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Assessing Source Evaluation Skills of Middle School Students Using Learning Progressions\",\"authors\":\"Jesse R. Sparks, P. V. van Rijn, P. Deane\",\"doi\":\"10.1080/10627197.2021.1966299\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT Effectively evaluating the credibility and accuracy of multiple sources is critical for college readiness. 
We developed 24 source evaluation tasks spanning four predicted difficulty levels of a hypothesized learning progression (LP) and piloted these tasks to evaluate the utility of an LP-based approach to designing formative literacy assessments. Sixth, seventh, and eighth grade students (N = 360, 120 per grade) completed 12 of the 24 tasks in an online testing session. Analyses examined the tasks’ reliability and validity and whether patterns of performance aligned to predicted LP levels (i.e., recovery of the LP) using task progression maps derived from item response theory (IRT). Results suggested that the LP tasks were reliable and correlated with external measures; however, some lower level tasks proved unexpectedly difficult. Possible explanations for low performance are discussed, followed by implications for future LP and task revisions. This work provides a model for designing and evaluating LP-based literacy assessments.\",\"PeriodicalId\":46209,\"journal\":{\"name\":\"Educational Assessment\",\"volume\":\"26 1\",\"pages\":\"213 - 240\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2021-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Educational Assessment\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/10627197.2021.1966299\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Educational 
Assessment","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/10627197.2021.1966299","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Assessing Source Evaluation Skills of Middle School Students Using Learning Progressions
ABSTRACT Effectively evaluating the credibility and accuracy of multiple sources is critical for college readiness. We developed 24 source evaluation tasks spanning four predicted difficulty levels of a hypothesized learning progression (LP) and piloted these tasks to evaluate the utility of an LP-based approach to designing formative literacy assessments. Sixth, seventh, and eighth grade students (N = 360, 120 per grade) completed 12 of the 24 tasks in an online testing session. Analyses examined the tasks’ reliability and validity and whether patterns of performance aligned with predicted LP levels (i.e., recovery of the LP), using task progression maps derived from item response theory (IRT). Results suggested that the LP tasks were reliable and correlated with external measures; however, some lower-level tasks proved unexpectedly difficult. Possible explanations for low performance are discussed, followed by implications for future LP and task revisions. This work provides a model for designing and evaluating LP-based literacy assessments.
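The abstract's claim that the LP was "recovered" rests on the IRT idea that tasks written for higher LP levels should show higher estimated difficulty. As a minimal illustration (not the paper's actual model or parameter estimates), the Rasch (1PL) model expresses the probability of success as a function of student ability and item difficulty; the hypothetical difficulty values below simply encode the expected ordering across the four LP levels:

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Rasch (1PL) model: probability that a student with ability
    theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical item difficulties for tasks at the four LP levels
# (illustrative only, not estimates from the study): a recovered
# progression means difficulty increases with LP level.
lp_difficulties = {1: -1.5, 2: -0.5, 3: 0.5, 4: 1.5}

for level, b in sorted(lp_difficulties.items()):
    p = rasch_p(theta=0.0, b=b)  # student of average ability
    print(f"LP level {level}: P(correct) = {p:.2f}")
```

Under this sketch, an average-ability student's success probability falls monotonically from Level 1 to Level 4; the paper's finding that some lower-level tasks were unexpectedly hard corresponds to violations of this monotonic ordering in the empirical difficulty estimates.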
Journal description:
Educational Assessment publishes original research and scholarship on the assessment of individuals, groups, and programs in educational settings. It includes theory, methodological approaches and empirical research in the appraisal of the learning and achievement of students and teachers, young children and adults, and novices and experts. The journal reports on current large-scale testing practices, discusses alternative approaches, presents scholarship on classroom assessment practices and includes assessment topics debated at the national level. It welcomes both conceptual and empirical pieces and encourages articles that provide a strong bridge between theory and/or empirical research and the implications for educational policy and/or practice.