Modeling Writing Traits in a Formative Essay Corpus

Paul Deane, Duanli Yan, Katherine Castellano, Y. Attali, Michelle Lamar, Mo Zhang, Ian Blood, James V. Bruno, Chen Li, Wenju Cui, Chunyi Ruan, Colleen Appel, Kofi James, Rodolfo Long, Farah Qureshi

ETS Research Report Series, published 2024-01-17. DOI: 10.1002/ets2.12377
This paper presents a multidimensional model of variation in writing quality, register, and genre in student essays, trained and tested via confirmatory factor analysis of 1.37 million essay submissions to ETS's digital writing service, Criterion®. The model was also validated on several other corpora, indicating that it fits essay data reasonably well from 4th grade through college. The paper analyzes the test-retest reliability of each trait; longitudinal trends by trait, both within the school year and from 4th through 12th grade; and genre differences by trait, using prompts from the Criterion topic library aligned with the major modes of writing (exposition, argumentation, narrative, description, process, comparison and contrast, and cause and effect). It demonstrates that many of the traits are about as reliable as overall e-rater® scores, that the trait model can be used to build scoring models somewhat more closely aligned with human scores than standard e-rater models, and that there are large, significant trait differences by genre, consistent with the genre differences in trait patterns described in the broader literature. Some traits showed clear trends across successive revisions: students using Criterion appear to have consistently improved grammar, usage, and spelling after receiving Criterion feedback, and to have marginally improved essay organization. Many traits also showed clear grade-level trends. These findings indicate that the trait model could support more detailed scoring and reporting in writing assessments and learning tools.
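For readers unfamiliar with the modeling step the abstract describes, the sketch below illustrates how a latent-trait measurement model of this kind can be specified and fit with confirmatory factor analysis. It is a minimal sketch using the Python library semopy and lavaan-style model syntax; the trait names, indicator features, and input file are hypothetical placeholders, not the paper's actual feature set or specification.

```python
# Minimal, illustrative CFA sketch with semopy.
# All trait and feature names below are hypothetical stand-ins.
import pandas as pd
import semopy

# Hypothetical per-essay feature table: one row per submission,
# columns are automatically extracted writing features (placeholder names).
df = pd.read_csv("essay_features.csv")  # assumed input file

# Measurement model: each latent trait (=~) loads on several observed features.
model_desc = """
Conventions  =~ grammar_errors + usage_errors + mechanics_errors + spelling_errors
Organization =~ discourse_units + transition_terms + paragraph_count
Vocabulary   =~ word_frequency + word_length + type_token_ratio
"""

model = semopy.Model(model_desc)
model.fit(df)

# Global fit indices (CFI, TLI, RMSEA, ...) used to judge whether the
# hypothesized trait structure is a reasonable fit to the corpus.
print(semopy.calc_stats(model).T)
```

In a confirmatory (as opposed to exploratory) analysis, the assignment of observed features to latent traits is fixed in advance, and the fit indices test how well that hypothesized structure reproduces the feature covariances, which is the sense in which the abstract reports the model providing "a reasonable fit" across corpora and grade levels.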