Examining Human and Automated Ratings of Elementary Students’ Writing Quality: A Multivariate Generalizability Theory Application

Authors: Dandan Chen, Michael A. Hebert, Joshua Wilson
Journal: American Educational Research Journal (Q1, Education & Educational Research; Impact Factor 3.5)
DOI: 10.3102/00028312221106773
Published: 2022-07-08 (Journal Article; citation count: 2)

Abstract: We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3–5 drawn from a larger study. Students wrote six essays across three genres. All essays were hand-scored by four raters and an AES system called Project Essay Grade (PEG). Both scoring methods were highly reliable, but PEG was more reliable for non-struggling students, while hand-scoring was more reliable for struggling students. We provide recommendations regarding ways of optimizing writing assessment and blending hand-scoring with AES.
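To illustrate the kind of reliability estimate generalizability theory produces, the sketch below computes variance components and a relative G coefficient for a simple one-facet persons × raters crossed design — the basic building block of the multivariate design the study applies. This is not the paper's actual analysis or data; the rating matrix is hypothetical, and the full study crosses additional facets (genres, occasions) and scoring methods.

```python
# Illustrative one-facet G-theory sketch (persons x raters, fully crossed).
# NOT the paper's analysis; the score matrix below is made up.
import numpy as np

def g_coefficient(scores: np.ndarray) -> dict:
    """Estimate variance components and the relative G coefficient
    for a persons x raters crossed design via expected mean squares."""
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    # ANOVA sums of squares for the crossed p x r design
    ss_p = n_r * ((person_means - grand) ** 2).sum()
    ss_r = n_p * ((rater_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_pr = ss_total - ss_p - ss_r  # interaction confounded with error

    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

    # Solve the expected-mean-square equations for variance components,
    # truncating negative estimates at zero as is conventional
    var_pr = ms_pr
    var_p = max((ms_p - ms_pr) / n_r, 0.0)
    var_r = max((ms_r - ms_pr) / n_p, 0.0)

    # Relative G coefficient: true (person) variance over person variance
    # plus relative error (interaction/error averaged over n_r raters)
    g_rel = var_p / (var_p + var_pr / n_r)
    return {"var_p": var_p, "var_r": var_r, "var_pr": var_pr, "g": g_rel}

# Hypothetical data: 5 essays each scored on a 1-6 scale by 4 raters
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 6, 5, 6],
    [3, 3, 2, 3],
    [1, 2, 1, 1],
], dtype=float)
result = g_coefficient(scores)
print(round(result["g"], 3))  # high agreement among raters -> G near 1
```

A G coefficient near 1, as in this toy example, corresponds to the "highly reliable" scoring the abstract reports; the study's multivariate extension additionally partitions variance across genres and compares the hand-scoring and PEG score profiles.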
About the journal:
The American Educational Research Journal (AERJ) is the flagship journal of the American Educational Research Association, featuring articles that advance the empirical, theoretical, and methodological understanding of education and learning. It publishes original peer-reviewed analyses that span the field of education research across all subfields and disciplines and all levels of analysis. It also encourages submissions across all levels of education throughout the life span and all forms of learning. AERJ welcomes submissions of the highest quality, reflecting a wide range of perspectives, topics, contexts, and methods, including interdisciplinary and multidisciplinary work.