{"title":"Establishing analytic score profiles for large-scale L2 writing assessment: The case of the CET-4 writing test","authors":"Shaoyan Zou , Xun Yan , Jason Fan","doi":"10.1016/j.asw.2024.100826","DOIUrl":null,"url":null,"abstract":"<div><p>This study addresses a critical need in large-scale L2 writing assessment by emphasizing the significance of tailoring assessments to specific teaching and learning contexts. Focusing on the CET-4 writing test in China, the research unfolded in two phases. In Phase I, an empirically-developed analytic rating scale designed for the CET-4 writing test was rigorously validated. Twenty-one raters used this scale to rate 30 essays, and Many-Facets Rasch Model (MFRM) analysis was performed on the rating data. The outcomes demonstrate the scale’s robustness in effectively differentiating examinees’ writing performance, ensuring consistency among raters, and mitigating rater variation at both individual and group level. Phase II extends the research scope by applying the validated scale to score 142 CET-4 writing scripts. Utilizing Hierarchical and K-Means cluster analyses, this phase unveils three distinct score profiles. These findings are significant for both the CET-4 writing test and other L2 large-scale writing assessment. Theoretically, this study introduces a perspective that aims to enhance our understanding of learners’ performance in large-scale L2 writing assessment. Methodologically, this study presents a framework that integrates the validation of the rating scale with the identification of distinct score clusters, thus aiming to provide a more detailed solution for tailoring assessments to specific learning contexts.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":null,"pages":null},"PeriodicalIF":4.2000,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Assessing Writing","FirstCategoryId":"98","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1075293524000199","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0
Abstract
This study addresses a critical need in large-scale L2 writing assessment by emphasizing the significance of tailoring assessments to specific teaching and learning contexts. Focusing on the CET-4 writing test in China, the research unfolded in two phases. In Phase I, an empirically developed analytic rating scale designed for the CET-4 writing test was rigorously validated. Twenty-one raters used this scale to rate 30 essays, and Many-Facet Rasch Model (MFRM) analysis was performed on the rating data. The outcomes demonstrate the scale’s robustness in effectively differentiating examinees’ writing performance, ensuring consistency among raters, and mitigating rater variation at both the individual and group levels. Phase II extends the research scope by applying the validated scale to score 142 CET-4 writing scripts. Using hierarchical and K-means cluster analyses, this phase identifies three distinct score profiles. These findings are significant both for the CET-4 writing test and for other large-scale L2 writing assessments. Theoretically, this study introduces a perspective that aims to enhance our understanding of learners’ performance in large-scale L2 writing assessment. Methodologically, this study presents a framework that integrates the validation of the rating scale with the identification of distinct score clusters, thus aiming to provide a more detailed solution for tailoring assessments to specific learning contexts.
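As a brief illustration of the two methods named in the abstract (not the study's own code, data, or software): the many-facet Rasch model expresses the log-odds of a script receiving rating category k rather than k−1 on criterion i from rater j as

$$\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k,$$

where $B_n$ is examinee ability, $D_i$ criterion difficulty, $C_j$ rater severity, and $F_k$ the threshold for category k. The two-step clustering of analytic scores into profiles can be sketched in Python roughly as follows; the number of criteria, the simulated scores, and the fixed three-cluster solution are hypothetical placeholders, used only to show the general workflow.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical analytic scores: one row per script, one column per rating-scale
# criterion. 142 scripts matches the abstract; the 4 criteria and 1-5 score
# range are placeholders, not the study's actual scale.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(142, 4)).astype(float)

# Standardize so criteria with different score ranges contribute equally.
z = StandardScaler().fit_transform(scores)

# Step 1: hierarchical (Ward) clustering, typically used to suggest a
# plausible number of clusters before a flat partition is imposed.
tree = linkage(z, method="ward")
hier_labels = fcluster(tree, t=3, criterion="maxclust")  # labels 1..3

# Step 2: K-means with that cluster count assigns each script to a profile.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(z)
profiles = kmeans.labels_  # labels 0..2

# Centroids (in z-score units) characterize each score profile across criteria.
print(np.round(kmeans.cluster_centers_, 2))
print(np.bincount(profiles))  # how many scripts fall in each profile
```

Pairing an exploratory hierarchical solution with a confirmatory K-means partition is a common design choice for profile studies of this kind; the study's actual criteria, distance metric, and cluster solution may differ from this sketch.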
Journal description:
Assessing Writing is a refereed international journal providing a forum for ideas, research and practice on the assessment of written language. Assessing Writing publishes articles, book reviews, conference reports, and academic exchanges concerning writing assessments of all kinds, including traditional (direct and standardised) forms of writing tests, alternative performance assessments (such as portfolios), workplace sampling and classroom assessment. The journal focuses on all stages of the writing assessment process, including needs evaluation, test development, assessment creation, implementation, and validation.