{"title":"来源使用特征会影响评分者对论证的判断吗?实验研究","authors":"Ping-Lin Chuang","doi":"10.1177/02655322241263629","DOIUrl":null,"url":null,"abstract":"This experimental study explores how source use features impact raters’ judgment of argumentation in a second language (L2) integrated writing test. One hundred four experienced and novice raters were recruited to complete a rating task that simulated the scoring assignment of a local English Placement Test (EPT). Sixty written responses were adapted from essays written by EPT test-takers. These responses were crafted to reflect different conditions of source use features, namely source use quantity and quality. Rater scores were analyzed using the many-facet Rasch model and mixed two-way analyses of variance (ANOVAs) to examine how they are affected by source use features and rater experience. Results show that source use features impacted the argumentation scores assigned by raters. Paragraphs with more source text ideas that are better incorporated received the highest argumentation scores, and vice versa for those with limited, poorly integrated source information. Rater experience impacted scores but did not influence rater performance meaningfully. The findings of this study connect specific source use features with raters’ evaluation of argumentation, helping to further disentangle the relationships among examinee performance, rater decision, and task features of integrated argumentative writing tests. They also provide meaningful implications for writing assessment research and practices.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"177 1","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Do source use features impact raters’ judgment of argumentation? An experimental study\",\"authors\":\"Ping-Lin Chuang\",\"doi\":\"10.1177/02655322241263629\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This experimental study explores how source use features impact raters’ judgment of argumentation in a second language (L2) integrated writing test. One hundred four experienced and novice raters were recruited to complete a rating task that simulated the scoring assignment of a local English Placement Test (EPT). Sixty written responses were adapted from essays written by EPT test-takers. These responses were crafted to reflect different conditions of source use features, namely source use quantity and quality. Rater scores were analyzed using the many-facet Rasch model and mixed two-way analyses of variance (ANOVAs) to examine how they are affected by source use features and rater experience. Results show that source use features impacted the argumentation scores assigned by raters. Paragraphs with more source text ideas that are better incorporated received the highest argumentation scores, and vice versa for those with limited, poorly integrated source information. Rater experience impacted scores but did not influence rater performance meaningfully. The findings of this study connect specific source use features with raters’ evaluation of argumentation, helping to further disentangle the relationships among examinee performance, rater decision, and task features of integrated argumentative writing tests. 
They also provide meaningful implications for writing assessment research and practices.\",\"PeriodicalId\":17928,\"journal\":{\"name\":\"Language Testing\",\"volume\":\"177 1\",\"pages\":\"\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2024-07-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Language Testing\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1177/02655322241263629\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"LANGUAGE & LINGUISTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Language Testing","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1177/02655322241263629","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Do source use features impact raters’ judgment of argumentation? An experimental study
This experimental study explores how source use features impact raters’ judgment of argumentation in a second language (L2) integrated writing test. One hundred four experienced and novice raters were recruited to complete a rating task that simulated the scoring assignment of a local English Placement Test (EPT). Sixty written responses were adapted from essays written by EPT test-takers and were crafted to reflect different conditions of source use features, namely source use quantity and quality. Rater scores were analyzed using the many-facet Rasch model and mixed two-way analyses of variance (ANOVAs) to examine how they were affected by source use features and rater experience. Results show that source use features impacted the argumentation scores assigned by raters: paragraphs that incorporated more source-text ideas, and incorporated them well, received the highest argumentation scores, whereas paragraphs with limited, poorly integrated source information received the lowest. Rater experience affected scores but did not meaningfully influence rater performance. The findings connect specific source use features with raters’ evaluation of argumentation, helping to further disentangle the relationships among examinee performance, rater decisions, and task features in integrated argumentative writing tests. They also provide meaningful implications for writing assessment research and practice.
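To make the analysis design concrete, the sketch below shows what a mixed two-way ANOVA of this kind could look like in Python with the pingouin library. It is not the study's actual analysis code: the condition labels, the 1–6 score scale, the even split of experienced and novice raters, and the simulated scores are all assumptions made purely for illustration. The design matches the abstract's description, with source use condition as a within-subjects factor and rater experience as a between-subjects factor.

```python
# Minimal sketch of a mixed two-way ANOVA on rater scores (hypothetical data).
# Within-subjects factor: source use condition; between-subjects factor: rater experience.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)

# Assumed condition labels crossing source use quantity and quality
conditions = ["high_qty_high_qual", "high_qty_low_qual",
              "low_qty_high_qual", "low_qty_low_qual"]
raters = [f"rater_{i:03d}" for i in range(104)]
# Assumed even split of experienced vs. novice raters
experience = {r: ("experienced" if i < 52 else "novice")
              for i, r in enumerate(raters)}

rows = []
for r in raters:
    for c in conditions:
        rows.append({
            "rater": r,
            "experience": experience[r],
            "condition": c,
            # Simulated argumentation score on an assumed 1-6 scale
            "score": int(rng.integers(1, 7)),
        })
df = pd.DataFrame(rows)

# Mixed ANOVA: each rater contributes one score per condition
aov = pg.mixed_anova(data=df, dv="score", within="condition",
                     subject="rater", between="experience")
print(aov.round(3))
```

The output table reports F statistics and effect sizes for the condition main effect, the experience main effect, and their interaction, which is the kind of comparison the abstract describes; the study's many-facet Rasch analysis would require separate, specialized software and is not sketched here.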
About the journal:
Language Testing is a fully peer-reviewed international journal that publishes original research and review articles on language testing and assessment. It provides a forum for the exchange of ideas and information among people working in the fields of first and second language testing and assessment, including researchers and practitioners in EFL and ESL testing, as well as assessment in child language acquisition and language pathology. In addition, special attention is given to issues of testing theory, experimental investigations, and the follow-up of practical implications.