Pub Date: 2024-02-16 | DOI: 10.1016/j.asw.2024.100827
Khaled Barkaoui
Picture description (PD) tasks are widely used in second language (L2) teaching, research, and assessment to elicit and/or evaluate L2 learners’ writing performance. However, there is little research on the effects of the characteristics of such tasks on L2 writing performance. This study aimed to examine the effects of task difficulty and learner variables on the linguistic characteristics of responses to the 2020 version of the Duolingo English Test (DET) PD writing tasks. The written responses of 1439 test takers from four first language (L1) backgrounds and at different levels of L2 proficiency to 335 DET PD tasks at two levels of difficulty were analyzed and compared in terms of measures of fluency, accuracy, and complexity using various computer programs. The findings indicated that task difficulty did not affect writing performance significantly; high-proficiency learners tended to perform better than did their low-proficiency counterparts; responses receiving higher scores tended to be longer and more accurate; and learner L1 was significantly associated with writing grades and linguistic features. The findings and their implications for the DET PD task and the DET validity argument are discussed.
"Exploring the effects of task difficulty and learner variables on performance on picture description writing tasks," Assessing Writing, Volume 60, Article 100827.
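The abstract above mentions computing measures of fluency, accuracy, and complexity with various computer programs. As a generic illustration of what such text-level proxies can look like (these are hypothetical stand-ins, not the study's actual instruments), a minimal Python sketch:

```python
import re

def caf_measures(text, error_count=0):
    """Toy complexity-accuracy-fluency (CAF) proxies for a written response.

    Illustrative stand-ins only, not the measures or tools used in the study:
    fluency = total word count, syntactic complexity = mean sentence length,
    accuracy = errors per 100 words (error_count must come from separate
    annotation; it is not detected automatically here).
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_words = len(words)
    return {
        "fluency_words": n_words,
        "complexity_mean_sentence_length": n_words / len(sentences) if sentences else 0.0,
        "accuracy_errors_per_100_words": 100 * error_count / n_words if n_words else 0.0,
    }

sample = "The man is reading a book. He sits near a window."
print(caf_measures(sample, error_count=1))
```

Real studies of this kind use validated analyzers and hand-annotated error counts; the sketch only shows the shape of the computation.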
Pub Date: 2024-02-02 | DOI: 10.1016/j.asw.2024.100809
Vicent Beltrán-Palanques
Research into the contribution of multimodality to language learning is gaining momentum. While most studies pave the way for new understandings of language teaching and learning, there is an increasing demand for comprehensive assessment practices, particularly within higher education contexts. A few studies have emphasized the importance of reflecting on and establishing criteria for the assessment of multimodal literacy. This is necessary to understand students’ contributions in detail and to provide them with effective support in developing their multimodal skills. This study discusses the assessment of multimodal writing in English for Specific Purposes (ESP) contexts. It presents the design of an analytical tool for assessing multimodal texts and provides an example of its application. This tool covers assessment categories such as language use, content expression, interpersonal meaning, multimodality, and creativity and originality. As an example, we focus on the multimodal writing of a video game narrative, a genre that requires the integration of multiple modes of communication to convey meaning more effectively. Finally, this study offers pedagogical insights into the assessment of multimodal literacy in ESP.
"Assessing video game narratives: Implications for the assessment of multimodal literacy in ESP," Assessing Writing, Volume 60, Article 100809.
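The five assessment categories the tool covers can be encoded as a simple analytic rubric. In the sketch below, the 0–4 band scale and the equal weighting are assumptions made for illustration, not details taken from the article:

```python
# Hypothetical encoding of the paper's five assessment categories as an
# equal-weight analytic rubric; the 0-4 band scale and equal weights are
# assumptions, not taken from the article.
RUBRIC_CATEGORIES = [
    "language use",
    "content expression",
    "interpersonal meaning",
    "multimodality",
    "creativity and originality",
]

def analytic_score(band_scores, max_band=4):
    """Average per-category bands and rescale to a 0-100 total."""
    missing = [c for c in RUBRIC_CATEGORIES if c not in band_scores]
    if missing:
        raise ValueError(f"missing categories: {missing}")
    total = sum(band_scores[c] for c in RUBRIC_CATEGORIES)
    return 100 * total / (max_band * len(RUBRIC_CATEGORIES))

example = {
    "language use": 3,
    "content expression": 4,
    "interpersonal meaning": 3,
    "multimodality": 2,
    "creativity and originality": 4,
}
print(analytic_score(example))  # 16 of 20 bands -> 80.0
```

Keeping the categories as explicit data makes it easy to report per-category feedback alongside the aggregate score, which analytic rubrics are designed to support.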
This paper examines secondary education students’ multilingual writing based on two input modalities to understand the influence of input on argumentation and sourcing strategies. Participants produced texts in Basque, Spanish and English based on a video or a text, and texts were analysed to explore their production of argumentation elements and sourcing strategies. Differences were found between input modalities in the use of data and rebuttals, and in copying. Across-language differences were also found in the use of data and counterarguments, and in the use of original ideas and paraphrasing. Additionally, complex argumentation elements elicited more original ideas than simple ones. Findings suggest that different writing sub-processes might be activated when composing from different sources and that argumentation and sourcing might be transferable across languages. These results may have important implications for educators in multilingual programs who aim to support their students in acquiring academic writing skills in multiple languages.
Roberto Arias-Hermoso, Ainara Imaz Agirre, Eneritz Garro Larrañaga, "A comparison between input modalities and languages in source-based multilingual argumentative writing," Assessing Writing, Volume 60, Article 100813. Pub Date: 2024-01-31 | DOI: 10.1016/j.asw.2024.100813
Pub Date: 2024-01-01 | DOI: 10.1016/j.asw.2024.100810
Jifeng Wu , Xiaofei Lu
This study examined differences in the overall noun phrase (NP) complexity and the use of specific types of NP modifiers among Korean learners of Chinese as a second language (CSL) at different proficiency levels and first language (L1) Chinese speakers. Our data consisted of a corpus of 134 narratives produced by 103 Korean CSL learners (33 beginner, 37 intermediate, and 33 advanced) and 31 L1 Chinese speakers on the same topic. Each narrative was manually analyzed using seven measures of overall NP complexity and 26 measures based on specific types of NP modifiers. Two measures of overall NP complexity (i.e., complex NP ratio and total length of all complex premodifiers) significantly discriminated all learner proficiency levels. The frequency of several types of NP modifiers consistently decreased or increased from lower to higher proficiency levels, while that of several others showed cross-proficiency fluctuations. Advanced learners demonstrated comparable levels of overall NP complexity as well as similar frequency of usage of specific types of NP modifiers as L1 Chinese speakers, with the exception that the latter used significantly more multiple modifiers. Our findings have useful implications for L2 Chinese writing syntactic complexity research and L2 Chinese writing pedagogy.
"Noun phrase complexity and second language Chinese proficiency: An analysis of narratives by Korean learners of Chinese," Assessing Writing, Volume 59, Article 100810.
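One of the discriminating measures named above, the complex NP ratio, can be illustrated with a small helper. The operationalization used here (an NP counts as complex if it carries at least one modifier) is an assumed simplification; the study applies its own manual coding scheme:

```python
def complex_np_ratio(np_modifier_counts):
    """Share of noun phrases carrying at least one modifier.

    `np_modifier_counts` is a list of per-NP modifier counts obtained from
    manual annotation, e.g. [0, 2, 1, 0]. Treating "complex" as "one or more
    modifiers" is an assumption for illustration, not the study's definition.
    """
    if not np_modifier_counts:
        return 0.0
    complex_nps = sum(1 for n in np_modifier_counts if n >= 1)
    return complex_nps / len(np_modifier_counts)

# Four NPs in a short narrative, two of them modified:
print(complex_np_ratio([0, 2, 1, 0]))  # 0.5
```

A measure like this rises as writers pack more information into noun phrases, which is why it can separate proficiency levels in the way the abstract describes.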
Pub Date: 2024-01-01 | DOI: 10.1016/j.asw.2024.100811
Mark Feng Teng , Maggie Ma
Student feedback literacy is essential to writing in a foreign language context. Feedback is a process through which learners become more familiar with the quality of their work and integrate constructive criticism to produce stronger written work. Implicit in this process is the assumption that learners should not passively receive information but actively seek, generate, process, and use feedback to apply new knowledge in current or subsequent writing tasks. Learners’ metacognitive awareness and skills may influence their participation in feedback-related activities that inform the monitoring and control of their writing process and determine their writing performance. However, little attention has been given to a systematic investigation of metacognition in feedback literacy. The present study, drawing upon a self-regulatory perspective, bridges this gap by fulfilling two purposes: (a) to validate a scale measuring metacognition-based student feedback literacy; and (b) to delineate the predictive effects of different components of the scale on academic writing performance. The results provided evidence for metacognitive awareness and skills in student feedback literacy and showed the predictive effects of the scale’s components, namely feedback-related strategies in participation (FRSP), motivation (M), feedback-related monitoring strategies (FRMS), and strategy knowledge (SK), on EFL learners’ performance in academic writing. Relevant implications are also discussed.
"Assessing metacognition-based student feedback literacy for academic writing," Assessing Writing, Volume 59, Article 100811.
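Validating a multi-component scale like this one typically involves checking the internal consistency of each subscale (e.g., FRSP, M, FRMS, SK). A minimal sketch of Cronbach's alpha, using hypothetical item responses rather than the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-response data.

    `items` is a list of equal-length lists, one per scale item, each holding
    one response per participant. Sample variance (ddof=1) is used throughout.
    The data in the example below are hypothetical, not the study's.
    """
    k = len(items)

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Per-participant totals across all items of the subscale:
    totals = [sum(col) for col in zip(*items)]
    item_var_sum = sum(sample_var(it) for it in items)
    return k / (k - 1) * (1 - item_var_sum / sample_var(totals))

# Three items of a hypothetical subscale, four respondents each:
responses = [[3, 4, 5, 4], [2, 4, 5, 3], [3, 5, 4, 4]]
print(round(cronbach_alpha(responses), 3))  # 6/7 ≈ 0.857
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the threshold depends on the stakes of the instrument.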
Pub Date: 2024-01-01 | DOI: 10.1016/j.asw.2024.100808
Solaire A. Finkenstaedt-Quinn , Field M. Watts , Ginger V. Shultz
Studies examining peer review demonstrate that students can learn from giving feedback to and receiving feedback from their peers, especially when they utilize information gained from the review process to revise. However, much of the research on peer review is situated within the literature regarding how students learn to write. With an increasing use of writing-to-learn in STEM classrooms, it is important to study how students engage in peer review for these types of writing assignments. This study sought to better understand how peer review and revision can support student learning for writing-to-learn specifically, using the lenses of cognitive perspectives of writing and engagement with written corrective feedback. Using a case study approach, we provide a detailed analysis of six students’ written artifacts in response to a writing-to-learn assignment that incorporated peer review and revision implemented in an organic chemistry course. Students demonstrated a range in the types of revisions they made and the extent to which the peer review process informed their revisions. Additionally, students exhibited surface, mid-level, and active engagement with the peer review and revision process. Considering the different engagement levels can inform how we frame peer review to students when using it as an instructional practice.
"Reading, receiving, revising: A case study on the relationship between peer review and revision in writing-to-learn," Assessing Writing, Volume 59, Article 100808.
Pub Date: 2023-12-26 | DOI: 10.1016/j.asw.2023.100806
Nikola Dobrić
The study reported in the paper starts with the hypothesis that errors observable in writing performances can account for much of the variability in the ratings awarded to them. The assertion is that this may be the case even when prescribed rating criteria explicitly direct rater focus towards successfully performed aspects of a writing performance rather than towards errors. The hypothesis is tested on a sample of texts rated independently of the study, using a five-point analytic rating scale involving ‘Can do’-like descriptors. The correlation between errors and ratings is ascertained using ordinal logistic regression, with an overall pseudo-R² of 0.51. Thus, with roughly 50% of score variability explainable by error occurrences, the stated hypothesis is considered confirmed. The study goes on to discuss the consequences of the findings and their potential use in the assessment of writing beyond the local assessment context.
"Effects of errors on ratings of writing performances – Evidence from a high-stakes exam," Assessing Writing, Volume 59, Article 100806.
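The fit statistic reported in the abstract, a pseudo-R² from ordinal logistic regression, is commonly McFadden's measure, computed from the log-likelihoods of the fitted model and an intercept-only null model. A minimal sketch, with hypothetical log-likelihood values chosen to reproduce the 0.51 figure:

```python
def mcfadden_pseudo_r2(ll_model, ll_null):
    """McFadden's pseudo-R^2 = 1 - LL_model / LL_null.

    Both arguments are log-likelihoods (negative numbers). A value around
    0.5, as reported in the study, indicates that the fitted ordinal model
    improves substantially on the intercept-only baseline.
    """
    return 1 - ll_model / ll_null

# Hypothetical log-likelihoods from a fitted ordinal logistic regression
# and its intercept-only counterpart (not the study's actual values):
print(mcfadden_pseudo_r2(-98.0, -200.0))  # 0.51
```

Note that pseudo-R² values are not directly comparable to OLS R²; reading 0.51 as "roughly 50% of score variability" is a common interpretive shorthand rather than an exact variance decomposition.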
Pub Date: 2023-12-17 | DOI: 10.1016/j.asw.2023.100804
Hanieh Shafiee Rad , Rasoul Alipour
As a prerequisite for effective teaching and learning outcomes, assessment literacy (AL) is imperative for both writing teachers and students. Although previous research has stressed the importance of AL in effective writing instruction, few studies have designed or explored interventions that can improve teachers' and students' AL. These interventions play a crucial role in equipping writing teachers with the necessary skills to effectively comprehend, apply, interpret, and critique assessments. Likewise, AL interventions are essential for writing students, as they enhance their knowledge, attitudes, actions, and critique of assessments, enabling them to understand assessment purposes, utilize feedback constructively, and critically evaluate assessment practices. To address this gap in the literature, the researchers employed a mixed-method approach, which included pre- and post-tests as well as semistructured interviews, to design and investigate effective interventions for enhancing AL. For the purpose of this study, we incorporated a four-dimensional conceptual framework for the teachers' intervention and a four-phase AL framework for the students' intervention as the basis for our investigation. The study findings indicated substantial improvements in both students' and teachers' writing-related abilities following the interventions. Both students and teachers also reported a positive perception of the interventions. Thus, these interventions were able to assist students in better understanding assessment criteria and help teachers provide feedback effectively. By implementing effective interventions, students as well as teachers will be equipped with the necessary tools to support writing instruction and ultimately reach higher levels of achievement.
"Unlocking writing success: Building assessment literacy for students and teachers through effective interventions," Assessing Writing, Volume 59, Article 100804.
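A pre-/post-test design like the one described above is often summarized with a paired effect size. A small sketch of Cohen's d for paired differences (sometimes written d_z), using hypothetical scores rather than the study's data:

```python
def paired_cohens_d(pre, post):
    """Cohen's d for a pre/post design: mean of the paired differences
    divided by the sample SD of those differences (d_z).

    The score lists below are hypothetical, not taken from the study.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / var_d ** 0.5

pre = [60, 55, 70, 65, 58]
post = [72, 63, 78, 70, 69]
print(round(paired_cohens_d(pre, post), 2))
```

Reporting an effect size alongside significance tests makes pre-/post-intervention gains easier to compare across studies than p-values alone.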