Chinese character matters!: An examination of linguistic accuracy in writing performances on the HSK test
Pub Date: 2023-07-01. DOI: 10.1016/j.asw.2023.100767. Assessing Writing 57, Article 100767.
Xun Yan , Jiani Lin
The orthographic and morphological system of Mandarin Chinese requires more time and developmental stages for learners to acquire. This source of difficulty might present unique challenges and opportunities for writing assessment for Chinese as a Second Language (CSL). This study employed a corpus-based approach to examine the accuracy features of 10,750 essays written by test-takers from 17 first language (L1) backgrounds on the HSK test. Based on both orthographic types and economic-geopolitical factors, we classified test-taker L1s into three groups. We first factor-analyzed a comprehensive array of error types to identify the underlying dimensions of Chinese writing accuracy. Then, dimension scores were included in regression models to predict HSK writing scores for different L1 groups. The results revealed five dimensions related to syntactic, morphological, and lexical errors. Among them, dimensions on character- and word-level errors were stronger predictors of HSK scores, although the discrimination power was stronger for test-takers from L1s that are orthographically dissimilar and economic-geopolitically distant from Mandarin Chinese. These findings suggest that Chinese morphology (i.e., the acquisition of characters and how characters form words) constitutes a unique source of difficulty for L2 learners. We argue that morphological elements should be an important subconstruct in Chinese writing assessments.
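As an illustration of the analytic pipeline summarized above (factor-analyzing error-type frequencies, then regressing writing scores on the resulting dimension scores by L1 group), the following is a minimal Python sketch. The data, column names, and the five-factor choice are invented stand-ins, not the authors' data or code.

```python
# Illustrative sketch of the abstract's pipeline: factor-analyze error-type
# frequencies, then regress writing scores on dimension scores per L1 group.
# All data, column names, and the 5-factor choice are hypothetical assumptions.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_essays, n_error_types = 500, 20   # stand-ins for 10,750 essays and the full error taxonomy
errors = pd.DataFrame(rng.poisson(2.0, size=(n_essays, n_error_types)),
                      columns=[f"error_{i}" for i in range(n_error_types)])
scores = rng.integers(0, 101, size=n_essays)                    # hypothetical HSK writing scores
l1_group = rng.choice(["similar", "intermediate", "distant"], size=n_essays)

# Step 1: extract underlying accuracy dimensions (the study reports five).
fa = FactorAnalysis(n_components=5, random_state=0)
dimension_scores = fa.fit_transform(errors)

# Step 2: within each L1 group, regress scores on dimension scores and
# compare how much variance the dimensions explain.
for group in np.unique(l1_group):
    mask = l1_group == group
    model = LinearRegression().fit(dimension_scores[mask], scores[mask])
    r2 = model.score(dimension_scores[mask], scores[mask])
    print(f"{group:12s} R^2 = {r2:.3f}  coefficients = {model.coef_.round(2)}")
```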
{"title":"Chinese character matters!: An examination of linguistic accuracy in writing performances on the HSK test","authors":"Xun Yan , Jiani Lin","doi":"10.1016/j.asw.2023.100767","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100767","url":null,"abstract":"<div><p>The orthographic and morphological system of Mandarin Chinese requires more time and developmental stages for learners to acquire. This source of difficulty might present unique challenges and opportunities for writing assessment for Chinese as a Second Language (CSL). This study employed a corpus-based approach to examine the accuracy features of 10,750 essays written by test-takers from 17 first language (L1) backgrounds on the HSK test. Based on both orthographic types and economic-geopolitical factors, we classified test-taker L1s into 3 groups. We first factor-analyzed a comprehensive array of error types to identify the underlying dimensions of Chinese writing accuracy. Then, dimension scores were included in regression models to predict HSK writing scores for different L1 groups. The results revealed five dimensions related to syntactic, morphological, and lexical errors. Among them, dimensions on character and word-level errors were stronger predictors of HSK scores, although the discrimination power was stronger for test-takers from L1s that are orthographically dissimilar and economic-geopolitically distant from Mandarin Chinese. These findings suggest that Chinese morphology (i.e., the acquisition of characters and how characters form words) constitutes a unique source of difficulty for L2 learners. We argue that morphological elements should be an important subconstruct in Chinese writing assessments. (200 words)</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"57 ","pages":"Article 100767"},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49817983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resiliency and vulnerability in early grades writing performance during the COVID-19 pandemic
Pub Date: 2023-07-01. DOI: 10.1016/j.asw.2023.100741. Assessing Writing 57, Article 100741. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10196154/pdf/
Deborah K. Reed , Jing Ma , Hope K. Gerde
To explore potential pandemic-related learning gaps in expressive writing skills, predominantly Hispanic (≈50%) and White (≈30%) primary-grade students responded to grade-specific writing prompts in the fall semesters before and after school closures. Responses were evaluated with an analytic rubric consisting of five traits (focus, organization, development, grammar, mechanics), each scored on a 1–4 scale. Data were first analyzed descriptively and, after propensity score weighting, with ordinal response models (for analytic scores) and generalized linear mixed-effects models (for composite scores). Compared to first graders in 2019 (n = 310), those in 2020 (n = 203) scored significantly lower overall as well as on all rubric criteria and were more likely to write unintelligible responses. Second graders in 2020 (n = 194) performed significantly lower than those in 2019 (n = 328) on some traits but not all, and there was a widening gap between students who did and did not score proficiently. A three-level longitudinal model analyzing the sample of students moving from first to second grade in fall 2020 (n = 90) revealed significant improvements, but these students still performed significantly lower than second graders in the previous year. Implications for student resiliency and instructional planning are discussed.
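A rough sketch of the propensity-score-weighting step described above follows: a logistic model estimates each student's propensity of belonging to the 2020 cohort from hypothetical covariates, and inverse-probability weights are then used in a simple weighted least-squares comparison of composite scores. The covariates, data, and the WLS stand-in (in place of the study's ordinal and mixed-effects models) are assumptions for illustration only.

```python
# Hypothetical sketch of propensity score weighting for a 2019 vs. 2020 cohort
# comparison; the study's ordinal and mixed-effects models are not reproduced here.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "cohort_2020": rng.integers(0, 2, n),       # 1 = post-closure cohort (assumed coding)
    "age_months": rng.normal(78, 4, n),         # hypothetical covariates
    "ell_status": rng.integers(0, 2, n),
    "composite_score": rng.normal(12, 3, n),    # hypothetical sum of the five 1-4 trait scores
})

# Step 1: propensity of being in the 2020 cohort given observed covariates.
X = df[["age_months", "ell_status"]]
ps = LogisticRegression().fit(X, df["cohort_2020"]).predict_proba(X)[:, 1]

# Step 2: inverse-probability-of-treatment weights.
w = np.where(df["cohort_2020"] == 1, 1 / ps, 1 / (1 - ps))

# Step 3: weighted comparison of composite scores across cohorts
# (a WLS stand-in for the generalized linear mixed-effects models in the paper).
design = sm.add_constant(df["cohort_2020"])
print(sm.WLS(df["composite_score"], design, weights=w).fit().summary())
```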
{"title":"Resiliency and vulnerability in early grades writing performance during the COVID-19 pandemic","authors":"Deborah K. Reed , Jing Ma , Hope K. Gerde","doi":"10.1016/j.asw.2023.100741","DOIUrl":"10.1016/j.asw.2023.100741","url":null,"abstract":"<div><p>To explore potential pandemic-related learning gaps on expressive writing skills, predominantly Hispanic (≈50%) and White (≈30%) primary-grade students responded to grade-specific writing prompts in the fall semesters before and after school closures. Responses were evaluated with an analytic rubric consisting of five traits (focus, organization, development, grammar, mechanics), each scored on a 1–4 scale. Data first were analyzed descriptively and, after propensity score weighting, with ordinal response models (for analytic scores) and generalized linear mixed effects models (for composite scores). Compared to first graders in 2019 (<em>n</em> = 310), those in 2020 (<em>n</em> = 203) scored significantly lower overall as well as on all rubric criteria and were more likely to write unintelligible responses. Second graders in 2020 (<em>n</em> = 194) performed significantly lower than those in 2019 (<em>n</em> = 328) in some traits but not all, and there was a widening gap between students who did/not score proficiently. A three-level longitudinal model analyzing the sample of students moving from first to second grade in fall 2020 (<em>n</em> = 90) revealed significant improvements, but students still performed significantly lower than second graders in the previous year. Implications for student resiliency and instructional planning are discussed.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"57 ","pages":"Article 100741"},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10196154/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9537914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring new insights into the role of cohesive devices in written academic genres
Pub Date: 2023-07-01. DOI: 10.1016/j.asw.2023.100749. Assessing Writing 57, Article 100749.
Mahmoud Abdi Tabari , Mark D. Johnson
This study examined the use of cohesive features in 270 narrative and argumentative essays produced by 45 second language (L2) students over a semester-long writing course. Multiple regression analyses were conducted to determine the ability of computational cohesion indices (TAACO) to predict human ratings of essay quality, to identify any differences in the use of cohesive devices between narrative and argumentative genres, and to ascertain which of the cohesive devices varied for each genre over time. The results indicated clear differences in how cohesion was signaled in the two genres: narrative texts relied on connective devices to signal cohesion, whereas argumentative texts relied on global-level repetition. With regard to development, the results were less conclusive but do suggest expansion in the participants’ use of cohesive devices. These results provide important implications for L2 writing pedagogy and assessment.
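The regression analysis outlined above can be sketched as follows: cohesion indices predicting human ratings, fitted separately by genre so coefficients can be compared. The index and column names below are invented stand-ins, not actual TAACO output variables.

```python
# Hypothetical sketch: regress essay-quality ratings on cohesion indices per genre.
# The index names are illustrative stand-ins, not actual TAACO variable names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 270
df = pd.DataFrame({
    "genre": rng.choice(["narrative", "argumentative"], n),
    "connectives": rng.normal(0, 1, n),          # z-scored cohesion indices (assumed)
    "lexical_overlap": rng.normal(0, 1, n),
    "global_repetition": rng.normal(0, 1, n),
    "rating": rng.normal(4, 1, n),               # human quality rating (assumed scale)
})

# Fit one model per genre and compare which indices carry the prediction.
for genre, sub in df.groupby("genre"):
    X = sm.add_constant(sub[["connectives", "lexical_overlap", "global_repetition"]])
    fit = sm.OLS(sub["rating"], X).fit()
    print(genre, fit.params.round(3).to_dict())
```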
{"title":"Exploring new insights into the role of cohesive devices in written academic genres","authors":"Mahmoud Abdi Tabari , Mark D. Johnson","doi":"10.1016/j.asw.2023.100749","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100749","url":null,"abstract":"<div><p><span>This study examined the use of cohesive features in 270 narrative and argumentative essays produced by 45 s language (L2) students over a semester-long writing course. Multiple </span>regression analyses were conducted to determine the ability of the computational indices of cohesion (TAACO) variables to predict human ratings of essay quality, recognize any differences in the use of cohesive devices between narrative and argumentative genres, and ascertain which of the cohesive devices varied for each of the genres over time. The results indicated clear differences in how cohesion was signaled between the two genres. Narrative texts relied on the use of connective devices to signal cohesion, whereas argumentative texts relied on the use of global-level repetition. With regard to development, the results were less conclusive but do suggest expansion in the participants’ use of cohesive devices. These results provide important implications for L2 writing pedagogy and assessment.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"57 ","pages":"Article 100749"},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49817978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using ChatGPT for second language writing: Pitfalls and potentials
Pub Date: 2023-07-01. DOI: 10.1016/j.asw.2023.100745. Assessing Writing 57, Article 100745.
Jessie S. Barrot
Recent advances in artificial intelligence have given rise to the use of chatbots as a viable tool for language learning. One such tool is ChatGPT, which engages users in natural and human-like interactive experiences. While ChatGPT has the potential to be an effective tutor and source of language input, some academics have expressed concerns about its impact on writing pedagogy and academic integrity. Thus, this tech review aims to explore the potential benefits and challenges of using ChatGPT for second language (L2) writing. This review concludes with some recommendations for L2 writing classroom practices.
{"title":"Using ChatGPT for second language writing: Pitfalls and potentials","authors":"Jessie S. Barrot","doi":"10.1016/j.asw.2023.100745","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100745","url":null,"abstract":"<div><p>Recent advances in artificial intelligence have given rise to the use of chatbots as a viable tool for language learning. One such tool is ChatGPT, which engages users in natural and human-like interactive experiences. While ChatGPT has the potential to be an effective tutor and source of language input, some academics have expressed concerns about its impact on writing pedagogy and academic integrity. Thus, this tech review aims to explore the potential benefits and challenges of using ChatGPT for second language (L2) writing. This review concludes with some recommendations for L2 writing classroom practices.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"57 ","pages":"Article 100745"},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49818286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diagnosing Chinese college-level English as a Foreign Language (EFL) learners’ integrated writing capability: A Log-linear Cognitive Diagnostic Modeling (LCDM) study
Pub Date: 2023-07-01. DOI: 10.1016/j.asw.2023.100730. Assessing Writing 57, Article 100730.
Kwangmin Lee
While a large body of research provides reliability and validity evidence for L2 integrated writing tasks, relatively little research has examined integrated writing tasks as a means of providing diagnostic insights for teachers and learners. The current study aims to fill this lacuna by applying a log-linear cognitive diagnostic model (LCDM) to reading-to-write integrated writing data collected from 315 Chinese college-level English as a Foreign Language (EFL) examinees. For this study, the integrated writing task was conceptualized as consisting of language use, source use, and content, with each of these unobservable attributes measured by surrogate indicators. Results showed that all pairs of postulated attributes were positively correlated. However, the association between language use and content (r = 0.36) was not as strong as that between language use and source use (r = 0.74) or between source use and content (r = 0.90). Also, item parameters indicated that language use is more important than the other attributes for obtaining a passing score for writing features. Lastly, the test-taker classification showed that it is impossible to master source use without the other attributes, demonstrating the dependence of source use on them. Implications for teaching are discussed.
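For readers unfamiliar with the LCDM, its item response function expresses the probability of a correct (or passing) response as a logistic function of an intercept plus main effects and interactions of the mastered attributes. The sketch below writes that function for a hypothetical item measuring two attributes; the parameter values are invented for illustration and are not estimates from this study.

```python
# Illustrative LCDM item response function for an item measuring two attributes.
# Parameter values are invented; they are not estimates from the study.
import itertools
import numpy as np

def lcdm_prob(alpha1, alpha2, lam0=-1.5, lam1=2.0, lam2=1.0, lam12=0.5):
    """P(correct | attribute profile) = logistic(intercept + main effects + interaction)."""
    logit = lam0 + lam1 * alpha1 + lam2 * alpha2 + lam12 * alpha1 * alpha2
    return 1 / (1 + np.exp(-logit))

# Probability of passing the item for each of the four attribute-mastery profiles.
for a1, a2 in itertools.product([0, 1], repeat=2):
    print(f"language_use={a1}, source_use={a2}: P(pass) = {lcdm_prob(a1, a2):.2f}")
```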
{"title":"Diagnosing Chinese college-level English as a Foreign Language (EFL) learners’ integrated writing capability: A Log-linear Cognitive Diagnostic Modeling (LCDM) study","authors":"Kwangmin Lee","doi":"10.1016/j.asw.2023.100730","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100730","url":null,"abstract":"<div><p>While a large body of research has been accumulated that provides reliability and validity evidence for L2 integrated writing tasks, relatively little research has been conducted to examine integrated writing tasks as a means to provide diagnostic insights for teachers and learners. The current study aims to fill in this lacuna by applying a log-linear cognitive diagnostic model (LCDM) to reading-to-write integrated writing data collected from 315 Chinese college-level English as a Foreign Language (EFL) examinees. For this study, the integrated writing task was conceptualized as consisting of <em>language use</em>, <em>source use</em>, and <em>content</em>, with each of these unobservable attributes measured by surrogate indicators. Results showed that all the pairs of postulated attributes were positively correlated. However, the association between <em>language use</em> and <em>content</em> (r = 0.36) was not as strong as that of either <em>language use</em> and <em>source use</em> (r = 0.74) or <em>source use</em> and <em>content</em> (r = 0.90). Also, item parameters indicated that <em>language use</em> is more important than other attributes for obtaining a passing score for writing features. Lastly, the test-taker classification showed that it is impossible to master <em>source use</em> without other attributes, demonstrating the dependence of <em>source use</em> on other attributes. Implications for teaching are discussed.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"57 ","pages":"Article 100730"},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49858777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Are self-compassionate writers more feedback literate? Exploring undergraduates’ perceptions of feedback constructiveness
Pub Date: 2023-07-01. DOI: 10.1016/j.asw.2023.100761. Assessing Writing 57, Article 100761.
Carlton J. Fong , Diane L. Schallert , Zachary H. Williamson , Shengjie Lin , Kyle M. Williams , Young Won Kim
Upon receiving constructive feedback, students may experience unpleasant emotions from critical comments about their writing or from the realization that their work is unfinished. Few studies have focused on how learners manage such emotions, one aspect of feedback literacy. Regulating these emotions may involve practicing self-kindness and avoiding self-judgment, two subcomponents of self-compassion. Self-compassionate individuals may move past any feelings of failure and direct their attention to what needs improvement. The question addressed was whether undergraduates’ level of self-compassion would affect their perceptions of the constructiveness of researcher-created feedback statements. At a U.S. southwest university, students (N = 508) rated the constructiveness of 56 statements that had been created to represent different levels of constructiveness in feedback on a fictitious writing assignment. Results indicated that students’ self-kindness positively predicted perceived feedback constructiveness, whereas self-judgment was a negative predictor. Additionally, students higher in self-compassion (high in self-kindness in one analysis, low in self-judgment in a second) rated the least constructive statements as more constructive than did students lower in self-compassion. We end with implications for feedback literacy and writing assessment research and for the application of self-compassion in the context of feedback on writing.
{"title":"Are self-compassionate writers more feedback literate? Exploring undergraduates’ perceptions of feedback constructiveness","authors":"Carlton J. Fong , Diane L. Schallert , Zachary H. Williamson , Shengjie Lin , Kyle M. Williams , Young Won Kim","doi":"10.1016/j.asw.2023.100761","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100761","url":null,"abstract":"<div><p>Upon receiving constructive feedback, students may experience unpleasant emotions from critical comments about their writing or the realization that their work is unfinished. Few studies have focused on how learners are able to manage such emotions, one aspect of feedback literacy. Regulating these emotions may involve practicing self-kindness and avoiding self-judgment, two subcomponents of self-compassion. Self-compassionate individuals may move past any feelings of failure and direct their attention to what needs improvement. The question addressed was whether undergraduates’ level of self-compassion would affect their perceptions of the constructiveness of researcher-created feedback statements. At a U.S. southwest university, students (<em>N</em> = 508) rated the constructiveness of 56 statements that had been created to represent different levels of constructiveness in feedback to a fictitious writing assignment. Results indicated that students’ self-kindness positively predicted feedback constructiveness, whereas self-judgment was a negative predictor. Additionally, students higher in self-compassion (high in self-kindness in one analysis and those low in self-judgment in a second) rated the least constructive statements as more constructive than did students low in self-compassion. We end with implications for feedback literacy and writing assessment research and for application of self-compassion in the context of feedback on writing.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"57 ","pages":"Article 100761"},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49818279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What skills are being assessed? Evaluating L2 Chinese essays written by hand and on a computer keyboard
Pub Date: 2023-07-01. DOI: 10.1016/j.asw.2023.100765. Assessing Writing 57, Article 100765.
Jianling Liao
As writing on computers has become increasingly common in L2 assessment and learning activities, it is crucial to understand the mediation effects induced by the computer on writing performance and to compare them with those of handwriting. This is especially important for L2 Chinese learning, given that handwriting characters has been claimed to play an essential role in the development of Chinese literacy. The current study extends the scope of writing modality investigation by examining the linguistic, metadiscourse, and organizational properties of handwritten and typed essays by L2 Chinese learners. Furthermore, predictors of holistic ratings of writing quality were identified in the two modes to understand whether the focal points of raters’ evaluations may differ between the two mediums. The results yielded moderate to strong evidence about how the two modalities allow for distinct affordances, interact differently with the L2 (i.e., Chinese), and consequently affect writing performance in various dimensions.
{"title":"What skills are being assessed? Evaluating L2 Chinese essays written by hand and on a computer keyboard","authors":"Jianling Liao","doi":"10.1016/j.asw.2023.100765","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100765","url":null,"abstract":"<div><p>As writing on computers has become increasingly common in L2 assessment and learning activities, it is crucial to understand the mediation effects induced by the computer on writing performance and to compare them with those of handwriting. This is especially important for L2 Chinese learning, given that handwriting characters has been claimed to play an essential role in the development of Chinese literacy. The current study extends the scope of writing modality investigation by examining the linguistic, metadiscourse, and organizational properties of handwritten and typed essays by L2 Chinese learners. Furthermore, predictors of holistic ratings of writing quality were identified in the two modes to understand whether the focal points of raters’ evaluations may differ between the two mediums. The results yielded moderate to strong evidence about how the two modalities allow for distinct affordances, interact differently with the L2 (i.e., Chinese), and consequently affect writing performance in various dimensions.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"57 ","pages":"Article 100765"},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49817933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond literacy and competency – The effects of raters’ perceived uncertainty on assessment of writing
Pub Date: 2023-07-01. DOI: 10.1016/j.asw.2023.100768. Assessing Writing 57, Article 100768.
Mari Honko , Reeta Neittaanmäki , Scott Jarvis , Ari Huhta
This study investigated how common raters’ experiences of uncertainty are in high-stakes testing before, during, and after the rating of writing performances, what these feelings of uncertainty involve, and what reasons might underlie them. We also examined whether uncertainty was related to raters’ rating experience or to the quality of their ratings. The data were gathered from the writing raters (n = 23) in the Finnish National Certificates of Proficiency, a standardized Finnish high-stakes language examination. The data comprised 12,118 ratings as well as raters’ survey responses and notes made during rating sessions. The responses were analyzed using thematic content analysis, and the ratings using descriptive statistics and Many-Facets Rasch analyses. The results show that uncertainty is variable and individual, and that even highly experienced raters can feel unsure about (some of) their ratings. However, uncertainty was not related to rating quality (consistency or severity/leniency), nor did it diminish with growing experience. Uncertainty during actual ratings was typically associated with characteristics of the rated performances, but also with other, more general rater-related or situational factors. Other reasons for uncertainty external to the rating session, such as those related to the raters themselves, were also identified. An analysis of the double-rated performances shows that although similar performance-related reasons seemed to cause uncertainty for different raters, their uncertainty was largely associated with different test-takers’ performances. While uncertainty can be seen as a natural part of holistic rating in high-stakes tests, the study shows that even if uncertainty is not associated with the quality of ratings, we should continually seek ways to address it in language testing, for example by developing rating scales and rater training. This may make raters’ work easier and less burdensome.
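The Many-Facets Rasch analyses mentioned above model the log-odds of adjacent rating categories as an additive function of examinee ability, task difficulty, rater severity, and category thresholds. The sketch below computes category probabilities under that model form for invented parameter values; it illustrates the model, not the study's estimates.

```python
# Illustrative Many-Facets Rasch (rating scale) model: category probabilities for one
# examinee-task-rater combination. All parameter values are invented for illustration.
import numpy as np

def mfrm_category_probs(theta, delta, alpha, thresholds):
    """P(score = k) where log(P_k / P_{k-1}) = theta - delta - alpha - tau_k."""
    # Cumulative sums of the adjacent-category logits give unnormalized log-probabilities.
    logits = np.concatenate(([0.0], np.cumsum(theta - delta - alpha - np.asarray(thresholds))))
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# Example: able examinee (theta=1.0), average task (delta=0.0), severe rater (alpha=0.5),
# four thresholds defining a 0-4 rating scale.
print(mfrm_category_probs(theta=1.0, delta=0.0, alpha=0.5,
                          thresholds=[-1.5, -0.5, 0.5, 1.5]).round(3))
```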
{"title":"Beyond literacy and competency – The effects of raters’ perceived uncertainty on assessment of writing","authors":"Mari Honko , Reeta Neittaanmäki , Scott Jarvis , Ari Huhta","doi":"10.1016/j.asw.2023.100768","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100768","url":null,"abstract":"<div><p>This study investigated how common raters’ experiences of uncertainty in high-stakes testing are before, during, and after the rating of writing performances, what these feelings of uncertainty are, and what reasons might underlie such feelings. We also examined if uncertainty was related to raters’ rating experience or to the quality of their ratings. The data were gathered from the writing raters (n = 23) in the Finnish National Certificates of Proficiency, a standardized Finnish high-stakes language examination. The data comprise 12,118 ratings as well as raters’ survey responses and notes during rating sessions. The responses were analyzed by using thematic content analysis and the ratings by descriptive statistics and Many-Facets Rasch analyses. The results show that uncertainty is variable and individual, and that even highly experienced raters can feel unsure about (some of) their ratings. However, uncertainty was not related to rating quality (consistency or severity/leniency). Nor did uncertainty diminish with growing experience. Uncertainty during actual ratings was typically associated with the characteristics of the rated performances but also with other, more general and rater-related or situational factors. Other reasons external to the rating session were also identified for uncertainty, such as those related to the raters themselves. An analysis of the double-rated performances shows that although similar performance-related reasons seemed to cause uncertainty for different raters, their uncertainty was largely associated with different test-takers’ performances. While uncertainty can be seen as a natural part of holistic ratings in high-stakes tests, the study shows that even if uncertainty is not associated with the quality of ratings, we should constantly seek ways to address uncertainty in language testing, for example by developing rating scales and rater training. This may make raters’ work easier and less burdensome.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"57 ","pages":"Article 100768"},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49817982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing writing in fourth grade: Rhetorical specification effects on text quality
Pub Date: 2023-07-01. DOI: 10.1016/j.asw.2023.100764. Assessing Writing 57, Article 100764.
Ilka Tabea Fladung , Sophie Gruhn , Veronika Österbauer , Jörg Jost
In writing instruction, specifying writing assignments in terms of purpose, audience, and medium is considered good practice. Earlier studies that found positive effects of such rhetorical specification were usually conducted with older participants. The benefits of rhetorical specification for novice writers are not yet clear, especially in the context of assessing writing. Thus, this study examined the effects of rhetorical specification on the text quality of descriptions in an assessment prompt for fourth graders. Austrian fourth graders were assessed with the same paper-pencil-based L1 writing prompt but were randomly assigned within classrooms to one of three conditions: high-level rhetorical specification (n = 78), medium-level rhetorical specification (n = 44), or no rhetorical specification (n = 44). The texts written by participants were rated holistically and analytically. The analysis revealed no differences between texts written under the three conditions except for a single analytic indicator of text quality: texts written in response to medium-level rhetorical specification scored higher on the criterion Adaptation to the audience than texts written under the other two conditions. The pros and cons of (high-level) rhetorical specification and good assessment practice with novice writers are discussed in light of the findings.
{"title":"Assessing writing in fourth grade: Rhetorical specification effects on text quality","authors":"Ilka Tabea Fladung , Sophie Gruhn , Veronika Österbauer , Jörg Jost","doi":"10.1016/j.asw.2023.100764","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100764","url":null,"abstract":"<div><p>In writing instruction, specifying writing assignments in terms of purpose, audience, and medium is considered good practice. Earlier studies that found positive effects of such <em>rhetorical specification</em> were usually conducted with older participants. The benefits of rhetorical specification for novice writers are not yet clear, especially in the context of assessing writing. Thus, this study examined the effects of rhetorical specification on text quality of descriptions in an assessment prompt for fourth graders. Austrian fourth graders were assessed with the same paper-pencil-based L1-writing prompt but were randomly assigned within classrooms to one of three different conditions: high-level rhetorical specification (<em>n</em> = 78), medium-level rhetorical specification (<em>n</em> = 44), or no rhetorical specification (<em>n</em> = 44). The texts written by participants were rated holistically and analytically. The analysis revealed no differences between texts written by students under these three different conditions of rhetorical specification levels except for one single analytic indicator of text quality. Texts written in response to medium-level rhetorical specification scored higher on the rating of the criterion <em>Adaptation to the audience</em> than texts written under the other two conditions. The pros and cons of (high-level) rhetorical specification and good assessment practice with novice writers are being discussed in the findings.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"57 ","pages":"Article 100764"},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49817984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}