
Latest publications in Assessing Writing

Influence of prior educational contexts on directed self-placement of L2 writers
IF 4.2 | CAS Tier 1 (Literature) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2024-07-01 | DOI: 10.1016/j.asw.2024.100870
Youmie J. Kim, Matthew J. Hammill

Directed self-placement (DSP) allows for student agency in writing placement. DSP has been implemented in many composition programs, although it has not been used as widely for L2 writers in higher education. This study investigates the relationship between student placement decisions and students’ prior educational backgrounds, particularly in relation to whether they had attended an English-medium high school or an intensive English program (IEP). Actual placement results via an exam were compared to 804 students’ self-placement decisions and correlated with their prior educational backgrounds. Findings indicated that most students’ DSP decisions matched actual exam placement results. However, a substantial number of DSP decisions were higher or lower than exam placement results. Additionally, the longer students studied at an English-medium instruction high school, the more likely they were to place themselves higher than their exam placement. We conclude that DSP can be used in L2 writing programs, but with careful attention to learners’ educational backgrounds, proficiency, and sense of identity.
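The core comparison the abstract describes can be sketched as a simple tally of how each self-placement relates to the exam placement. This is an illustrative sketch with invented toy data, not the authors' actual dataset or coding scheme:

```python
# Hypothetical sketch: classify each student's DSP decision as a match,
# over-placement, or under-placement relative to exam placement.
# Placement levels are encoded as integers (higher = higher level).
from collections import Counter

def placement_agreement(exam, dsp):
    """Tally matches, over-placements, and under-placements."""
    outcomes = Counter()
    for e, d in zip(exam, dsp):
        if d == e:
            outcomes["match"] += 1
        elif d > e:
            outcomes["over"] += 1
        else:
            outcomes["under"] += 1
    return outcomes

# Toy data: placement levels 1-3 for six students.
exam_levels = [1, 2, 2, 3, 1, 2]
dsp_levels  = [1, 2, 3, 3, 2, 1]
print(placement_agreement(exam_levels, dsp_levels))
# Counter({'match': 3, 'over': 2, 'under': 1})
```

The resulting counts could then be cross-tabulated against background variables (e.g. years at an English-medium high school) to probe the association the study reports.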

Assessing Writing, Volume 61, Article 100870. Citations: 0
"Navigating innovation and equity in writing assessment"
IF 4.2 | CAS Tier 1 (Literature) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2024-07-01 | DOI: 10.1016/j.asw.2024.100873
Kelly Hartwell , Laura Aull

The 2024 Tools & Technology forum underscores the significant role of emerging writing technologies in shaping writing assessment practices post-COVID-19, emphasizing the necessity of ensuring that these innovations uphold core principles of validity, fairness, and equity. AI-driven tools offer promising improvements but also require careful consideration to ensure that they reflect writing constructs, align with educational goals, and promote equitable assessment practices. Validity is explored through dimensions such as construct, content, and consequential validity, raising questions about how assessment tools may capture the complexity of writing and their broader impacts on educational stakeholders. Fairness in writing assessment is examined with regard to cultural responsiveness and accessibility, and how assessment tools may be designed to accommodate various student needs. Equity extends these considerations by addressing systemic inequities and promoting assessment practices that support diverse learning styles and reduce barriers for marginalized students. The reviews of three assessment tools—PERSUADE 2.0, EvaluMate, and a web application for systematic review writing—illustrate how innovations can support valid, fair, and equitable writing assessments across educational contexts. The forum emphasizes the importance of ongoing dialogue and adaptation to create inclusive and just educational experiences.

Assessing Writing, Volume 61, Article 100873. Citations: 0
Effects of peer feedback in English writing classes on EFL students’ writing feedback literacy
IF 4.2 | CAS Tier 1 (Literature) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2024-07-01 | DOI: 10.1016/j.asw.2024.100874
Fanrong Weng , Cecilia Guanfang Zhao , Shangwen Chen

Despite the increasing scholarly attention towards students’ writing feedback literacy in recent years, empirical explorations of effective approaches to enhancing this capacity remain scarce. While peer feedback often plays an important role in English as a Foreign Language (EFL) writing development, few studies seem to have addressed the potential impacts of peer feedback activities on students’ overall writing feedback literacy. To fill this gap, a mixed-methods study was designed to investigate the effect of peer feedback activities on students’ writing feedback literacy development across such dimensions as appreciating feedback, making judgements, acknowledging different sources of feedback, managing affect, and taking actions with feedback. Two intact classes, one serving as the experimental group and the other as the control group, participated in the study. The experimental group engaged in peer feedback activities during the semester (12 weeks), whereas the control group received conventional teacher feedback only. The pre- and post-intervention results based on a writing feedback literacy scale were compared between the two groups, in addition to the analysis of interviews with the teacher and focal students from the experimental group, as well as students’ written assignments and revisions after receiving peer feedback. Results showed that peer feedback activities could significantly improve students’ appreciation of feedback and their ability to make judgements. Nevertheless, no significant changes in other dimensions were identified. These findings extend the current understanding of EFL students’ writing feedback literacy and hold valuable pedagogical implications.

Assessing Writing, Volume 61, Article 100874. Citations: 0
Matches and mismatches between Saudi university students' English writing feedback preferences and teachers' practices
IF 3.9 | CAS Tier 1 (Literature) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2024-06-17 | DOI: 10.1016/j.asw.2024.100863
Muhammad M.M. Abdel Latif , Zainab Alsuhaibani , Asma Alsahil

Though much research has dealt with feedback practices in L2 writing classes, few studies have tried to investigate learner and teacher feedback perspectives from a wide angle. Drawing on an 8-dimension framework of feedback in writing classes, this study investigated the potential matches and mismatches between Saudi university students' English writing feedback preferences and their teachers' reported practices. Quantitative and qualitative data were collected using a student questionnaire and a teacher questionnaire. The two surveys assessed students' preferences for and teachers' use of 26 writing feedback modes, strategies and activities. A total of 575 undergraduate English majors at 11 Saudi universities completed the student questionnaire, and 82 writing instructors completed the teacher questionnaire. The data analysis revealed that the differences between the students' English writing feedback preferences and their teachers' practices vary from one feedback dimension to another. The study generally indicates that the mismatches between the students' writing feedback preferences and the teachers' reported practices far exceed the matches. The qualitative data obtained from the answers to a set of open-ended questions in both questionnaires provided information about the students' and teachers' feedback-related beliefs and reasons. The paper ends with discussing the results and their implications.

Assessing Writing, Volume 61, Article 100863. Citations: 0
Does “more complexity” equal “better writing”? Investigating the relationship between form-based complexity and meaning-based complexity in high school EFL learners’ argumentative writing
IF 3.9 | CAS Tier 1 (Literature) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2024-06-13 | DOI: 10.1016/j.asw.2024.100867
Sachiko Yasuda

The study examines the relationship between form-based complexity and meaning-based complexity in argumentative essays written by high school students learning English as a foreign language (EFL) in relation to writing quality. The data comprise argumentative essays written by 102 Japanese high school learners at different proficiency levels. The students’ proficiency levels were determined based on the evaluation of their argumentative essays by human raters using the GTEC rubric. The students’ essays were analyzed from multiple dimensions, focusing on both form-based complexity (lexical complexity, large-grained syntactic complexity, and fine-grained syntactic complexity features) and meaning-based complexity (argument quality). The results of the multidimensional analysis revealed that the most influential factor in determining overall essay scores was not form-based complexity but meaning-based complexity achieved through argument quality. Moreover, the results indicated that meaning-based complexity was strongly correlated with the use of complex nominals rather than clausal complexity. These insights have significant implications for both the teaching and assessment of argumentative essays among high school EFL learners, underscoring the importance of understanding what aspects of writing to prioritize and how best to assess student writing.
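Form-based lexical complexity, one of the dimensions the abstract names, is typically operationalized through indices computed over the essay's tokens. As a hedged illustration (this is one common index, not necessarily the instrument the study used), the moving-average type-token ratio (MATTR) can be sketched as follows, with invented example text:

```python
# Illustrative lexical-complexity index: moving-average type-token ratio
# (MATTR), computed over a fixed window so the score is less sensitive
# to overall text length than a plain type-token ratio.

def mattr(tokens, window=5):
    """Mean type-token ratio over all contiguous windows of `window` tokens."""
    if len(tokens) < window:
        return len(set(tokens)) / len(tokens)
    ratios = [
        len(set(tokens[i:i + window])) / window
        for i in range(len(tokens) - window + 1)
    ]
    return sum(ratios) / len(ratios)

text = "the cat sat on the mat and the dog sat on the rug".split()
print(round(mattr(text, window=5), 3))
# 0.911
```

Syntactic indices such as complex nominals per clause require a parser, but follow the same pattern: count a structural feature, normalize by a production unit, and correlate the index with rated quality.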

Assessing Writing, Volume 61, Article 100867. Citations: 0
Thirty years of writing assessment: A bibliometric analysis of research trends and future directions
IF 3.9 | CAS Tier 1 (Literature) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2024-06-07 | DOI: 10.1016/j.asw.2024.100862
Jihua Dong , Yanan Zhao , Louisa Buckingham

This study employs a bibliometric analysis to identify the research trends in the field of writing assessment over the last 30 years (1993–2022). Employing a dataset of 1,712 articles and 52,092 unique references, keyword co-occurrence analyses were used to identify prominent research topics, co-citation analyses were conducted to identify influential publications and journals, and a structural variation analysis was employed to identify transformative research in recent years. The results revealed the growing popularity of the writing assessment field, and the increasing diversity of research topics in the field. The research trends have become more associated with technology and cognitive and metacognitive processes. The influential publications indicate changes in research interest towards cross-disciplinary publications. The journals identified as key venues for writing assessment research also changed across the three decades. The latest transformative research points out possible future directions, including the integration of computational methods in writing assessment, and investigations into relationships between writing quality and various factors. This study contributes to our understanding of the development and future directions of writing assessment research, and has implications for researchers and practitioners.
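The keyword co-occurrence analysis the abstract mentions boils down to counting how often two keywords are attached to the same article; the resulting pair counts become edge weights in a keyword network. A minimal sketch with invented example records (not the study's corpus):

```python
# Minimal keyword co-occurrence counting: each article contributes one
# count to every unordered pair of its (deduplicated) keywords.
from itertools import combinations
from collections import Counter

def cooccurrence(keyword_lists):
    pairs = Counter()
    for kws in keyword_lists:
        # sort so each unordered pair has one canonical key
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

articles = [
    ["writing assessment", "feedback", "rater"],
    ["writing assessment", "feedback"],
    ["feedback", "peer review"],
]
counts = cooccurrence(articles)
print(counts[("feedback", "writing assessment")])  # 2
```

Tools such as CiteSpace or VOSviewer perform this aggregation at scale and then cluster and visualize the pair counts.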

Assessing Writing, Volume 61, Article 100862. Citations: 0
EvaluMate: Using AI to support students’ feedback provision in peer assessment for writing
IF 3.9 | CAS Tier 1 (Literature) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2024-05-31 | DOI: 10.1016/j.asw.2024.100864
Kai Guo

Peer feedback plays an important role in promoting learning in the writing classroom. However, providing high-quality feedback can be demanding for student reviewers. To address this challenge, this article proposes an AI-enhanced approach to peer feedback provision. I introduce EvaluMate, a newly developed online peer review system that leverages ChatGPT, a large language model (LLM), to scaffold student reviewers’ feedback generation. I discuss the design and functionality of EvaluMate, highlighting its affordances in supporting student reviewers’ provision of comments on peers’ essays. I also address the system’s limitations and propose potential solutions. Furthermore, I recommend future research on students’ engagement with this learning approach and its impact on learning outcomes. By presenting EvaluMate, I aim to inspire researchers and practitioners to explore the potential of AI technology in the teaching, learning, and assessment of writing.
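The article does not publish EvaluMate's implementation, so the following is only a hypothetical sketch of the general pattern it describes: assembling a rubric-anchored prompt so that an LLM scaffolds, rather than replaces, the student reviewer's own comment. The rubric criteria, function name, and instruction wording are all invented for illustration:

```python
# Hypothetical prompt-scaffolding pattern for LLM-assisted peer review.
# Everything here (rubric, wording) is illustrative, not EvaluMate's code.

RUBRIC = ["thesis clarity", "organization", "evidence use"]

def build_scaffold_prompt(essay_excerpt, reviewer_draft, rubric=RUBRIC):
    """Combine essay text, the student's draft comment, and rubric criteria
    into one instruction asking the LLM to improve the comment, not write it."""
    criteria = "\n".join(f"- {c}" for c in rubric)
    return (
        "You are assisting a student peer reviewer.\n"
        f"Rubric criteria:\n{criteria}\n\n"
        f"Essay excerpt:\n{essay_excerpt}\n\n"
        f"Student's draft comment:\n{reviewer_draft}\n\n"
        "Suggest how the student could make this comment more specific "
        "and rubric-aligned. Do not write the comment for them."
    )

prompt = build_scaffold_prompt("Social media harms focus...", "Good essay!")
print(prompt.startswith("You are assisting"))  # True
```

The assembled string would then be sent to an LLM endpoint; keeping the "do not write it for them" constraint in the instruction is what distinguishes scaffolding from automated comment generation.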

Assessing Writing, Volume 61, Article 100864. Citations: 0
Comparing Chinese L2 writing performance in paper-based and computer-based modes: Perspectives from the writing product and process
IF 3.9 | CAS Tier 1 (Literature) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2024-05-31 | DOI: 10.1016/j.asw.2024.100849
Xiaozhu Wang, Jimin Wang

As writing is a complex language-producing process dependent on the writing environment and medium, the comparability of computer-based (CB) and paper-based (PB) writing assessments has been studied extensively since the emergence of computer-based language writing assessment. This study investigated the differences in the writing product and process between CB and PB modes of writing assessment in Chinese as a second language, whose character-based writing system is considered challenging for learners. The many-facet Rasch model (MFRM) was adopted to reveal the text quality differences. Keystrokes and handwriting trace data were utilized to unveil insights into the writing process. The results showed that Chinese L2 learners generated higher-quality texts with fewer character mistakes in the CB mode. They revised much more, and paused for shorter durations and less frequently between lower-level linguistic units, in the CB mode. The quality of CB text is associated with revision behavior, whereas pause duration serves as a stronger predictor of PB text quality. The findings suggest that the act of handwriting Chinese characters makes the construct of PB distinct from the CB writing assessment in L2 Chinese. Thus, the setting of the assessment mode should consider the target language use and the test taker’s characteristics.
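The pause measures reported in keystroke-logging research are usually derived from inter-keystroke intervals: gaps above a chosen threshold count as pauses. This is an illustrative sketch with invented timestamps and an assumed 2-second threshold, not the study's actual logging pipeline:

```python
# Pause extraction from a keystroke log: intervals between consecutive
# keystrokes that meet the threshold count as pauses; report how many
# occurred and their mean duration.

def pause_stats(timestamps, threshold=2.0):
    """timestamps: keystroke times in seconds, in ascending order."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    pauses = [g for g in gaps if g >= threshold]
    if not pauses:
        return {"count": 0, "mean": 0.0}
    return {"count": len(pauses), "mean": sum(pauses) / len(pauses)}

# Toy log: bursts of typing separated by two long pauses.
log = [0.0, 0.3, 0.6, 3.1, 3.4, 6.4]
print(pause_stats(log))  # {'count': 2, 'mean': 2.75}
```

In practice such stats are computed separately at different linguistic boundaries (within words, between words, between clauses) so that pause location, not just duration, can be compared across modes.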

{"title":"Comparing Chinese L2 writing performance in paper-based and computer-based modes: Perspectives from the writing product and process","authors":"Xiaozhu Wang,&nbsp;Jimin Wang","doi":"10.1016/j.asw.2024.100849","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100849","url":null,"abstract":"<div><p>As writing is a complex language-producing process dependent on the writing environment and medium, the comparability of computer-based (CB) and paper-based (PB) writing assessments has been studied extensively since the emergence of computer-based language writing assessment. This study investigated the differences in the writing product and process between CB and PB modes of writing assessment in Chinese as a second language, of which the character writing system is considered challenging for learners. The many-facet Rasch model (MFRM) was adopted to reveal the text quality differences. Keystrokes and handwriting trace data were utilized to unveil insights into the writing process. The results showed that Chinese L2 learners generated higher-quality texts with fewer character mistakes in the CB mode. They revised much more, paused shorter and less frequently between lower-level linguistic units in the CB mode. The quality of CB text is associated with revision behavior, whereas pause duration serves as a stronger predictor of PB text quality. The findings suggest that the act of handwriting Chinese characters makes the construct of PB distinct from the CB writing assessment in L2 Chinese. 
Thus, the setting of the assessment mode should consider the target language use and the test taker’s characteristics.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100849"},"PeriodicalIF":3.9,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141243091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
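The pause measures described in the abstract (pause duration and pause frequency between units of text) can be sketched from timestamped keystroke logs. The snippet below is an illustrative reconstruction with synthetic data, not the authors' actual analysis pipeline; the 2-second pause threshold and the function name are assumptions for demonstration.

```python
# Illustrative sketch: deriving pause measures from timestamped keystrokes.
# Keystroke-logging studies vary in how they set the pause threshold and
# segment linguistic units; the values here are assumptions.

def pause_measures(timestamps, threshold=2.0):
    """Return (pause_count, mean_pause_duration) for inter-keystroke
    intervals of at least `threshold` seconds."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    pauses = [iv for iv in intervals if iv >= threshold]
    if not pauses:
        return 0, 0.0
    return len(pauses), sum(pauses) / len(pauses)

# Synthetic log: keystroke times in seconds from task onset.
log = [0.0, 0.3, 0.6, 3.1, 3.4, 3.7, 6.9, 7.1]
count, mean_dur = pause_measures(log)
print(count, round(mean_dur, 2))
```

In a real study the same interval list would be split by the linguistic level of the boundary (within word, between words, between clauses) before computing these statistics.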
Citations: 0
A teacher’s inquiry into diagnostic assessment in an EAP writing course
IF 3.9, Tier 1 (Literature), Q1 EDUCATION & EDUCATIONAL RESEARCH, Pub Date: 2024-05-30, DOI: 10.1016/j.asw.2024.100848
Rabail Qayyum

Research into diagnostic assessment of writing has largely ignored how diagnostic feedback leads to differentiated instruction and learning. This case study presents a teacher’s account of validating an in-house diagnostic assessment procedure in an English for Academic Purposes writing course with a view to refining it. I developed a validity argument and gathered and interpreted related evidence, focusing on one student’s performance in, and perception of, the assessment. The analysis revealed that the absence of proper feedback mechanisms limited the use of the test to some extent, weakened its impact, and reduced its potential for learning. I propose a modification to the assessment procedure involving a sample student feedback report.

引用次数: 0
Construct representation and predictive validity of integrated writing tasks: A study on the writing component of the Duolingo English Test
IF 3.9, Tier 1 (Literature), Q1 EDUCATION & EDUCATIONAL RESEARCH, Pub Date: 2024-05-28, DOI: 10.1016/j.asw.2024.100846
Qin Xie

This study examined whether two integrated reading-to-write tasks could broaden the construct representation of the writing component of the Duolingo English Test (DET), and whether they could enhance DET’s power to predict academic English writing at university. The tasks were (1) writing a summary based on two source texts and (2) writing a reading-to-write essay based on five texts. Both were administered to a sample (N = 204) of undergraduates from Hong Kong, each of whom also submitted an academic assignment written for the assessment of a disciplinary course. Three professional raters double-marked all writing samples against detailed analytic rubrics. Raw scores were first processed using many-facet Rasch measurement to estimate inter- and intra-rater consistency and to generate adjusted (fair) measures. Based on these measures, descriptive analyses, sequential multiple regression, and structural equation modeling were conducted, in that order. The analyses verified the writing tasks’ underlying component constructs and assessed their relative contributions to the overall integrated writing scores. Both tasks were found to contribute to DET’s construct representation and to add moderate predictive power for domain performance. The findings and their practical implications are discussed, especially the complex relations between construct representation and predictive validity.
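The sequential (hierarchical) multiple regression step named in the abstract tests whether adding a new predictor block raises explained variance. A minimal numpy sketch of the incremental-R² logic is shown below; the data are synthetic and the variable names (summary score, essay score, outcome) are illustrative, not the study's actual model.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (intercept added internally)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 204  # sample size matching the study; the data themselves are synthetic
summary = rng.normal(size=n)                   # block 1: summary-task score
essay = 0.5 * summary + rng.normal(size=n)     # block 2: reading-to-write essay score
outcome = 0.6 * summary + 0.4 * essay + rng.normal(size=n)  # criterion: assignment score

r2_block1 = r_squared(summary[:, None], outcome)
r2_block2 = r_squared(np.column_stack([summary, essay]), outcome)
print(f"R2 block1={r2_block1:.3f}, delta R2={r2_block2 - r2_block1:.3f}")
```

Entering the blocks in a fixed order attributes shared variance to the earlier block, which is why the sequence of entry matters in this design.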

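Both studies above rely on many-facet Rasch measurement to produce rater-adjusted (fair) scores. As a sketch of the general approach, not either study's exact specification, a common three-facet rating-scale formulation models the log-odds of examinee n receiving category k rather than k−1 from rater j on criterion i as:

```latex
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \alpha_j - \delta_i - \tau_k
```

where $\theta_n$ is examinee ability, $\alpha_j$ rater severity, $\delta_i$ criterion difficulty, and $\tau_k$ the threshold between adjacent score categories; fair measures are the $\theta_n$ estimates with rater and criterion effects partialled out.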
Citations: 0