
Latest publications in Assessing Writing

Matches and mismatches between Saudi university students' English writing feedback preferences and teachers' practices
IF 3.9 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-06-17 DOI: 10.1016/j.asw.2024.100863
Muhammad M.M. Abdel Latif , Zainab Alsuhaibani , Asma Alsahil

Though much research has dealt with feedback practices in L2 writing classes, few studies have investigated learner and teacher feedback perspectives from a broad angle. Drawing on an 8-dimension framework of feedback in writing classes, this study investigated the potential matches and mismatches between Saudi university students' English writing feedback preferences and their teachers' reported practices. Quantitative and qualitative data were collected using a student questionnaire and a teacher questionnaire. The two surveys assessed students' preferences for, and teachers' use of, 26 writing feedback modes, strategies, and activities. A total of 575 undergraduate English majors at 11 Saudi universities completed the student questionnaire, and 82 writing instructors completed the teacher questionnaire. The data analysis revealed that the differences between the students' English writing feedback preferences and their teachers' practices vary from one feedback dimension to another. Overall, the study indicates that the mismatches between the students' writing feedback preferences and the teachers' reported practices far exceed the matches. The qualitative data obtained from a set of open-ended questions in both questionnaires shed light on the students' and teachers' feedback-related beliefs and reasons. The paper concludes by discussing the results and their implications.
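The preference-versus-practice comparison across the 26 feedback modes can be illustrated with a minimal sketch. Everything below is hypothetical: the feedback modes, the 1–5 ratings, and the 0.5-point mismatch threshold are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch: flag feedback modes where mean student preference
# and mean teacher-reported use diverge by more than a chosen threshold.

def find_mismatches(preferences, practices, threshold=0.5):
    """Return modes whose |preference - practice| gap exceeds threshold.

    preferences / practices: dicts mapping feedback mode -> mean rating.
    """
    mismatches = {}
    for mode in preferences:
        gap = preferences[mode] - practices.get(mode, 0.0)
        if abs(gap) > threshold:
            mismatches[mode] = round(gap, 2)
    return mismatches

# Toy mean ratings on a 1-5 scale (invented for illustration).
student_prefs = {"oral conferences": 4.6, "written comments": 4.2, "peer review": 3.1}
teacher_use = {"oral conferences": 2.9, "written comments": 4.0, "peer review": 3.3}

print(find_mismatches(student_prefs, teacher_use))
# {'oral conferences': 1.7}
```

A positive gap here would mean students want a feedback mode more than teachers report using it; the real study worked per dimension rather than with a single global threshold.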

Citations: 0
Does “more complexity” equal “better writing”? Investigating the relationship between form-based complexity and meaning-based complexity in high school EFL learners’ argumentative writing
IF 3.9 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-06-13 DOI: 10.1016/j.asw.2024.100867
Sachiko Yasuda

The study examines the relationship between form-based complexity and meaning-based complexity in argumentative essays written by high school students learning English as a foreign language (EFL) in relation to writing quality. The data comprise argumentative essays written by 102 Japanese high school learners at different proficiency levels. The students’ proficiency levels were determined based on the evaluation of their argumentative essays by human raters using the GTEC rubric. The students’ essays were analyzed from multiple dimensions, focusing on both form-based complexity (lexical complexity, large-grained syntactic complexity, and fine-grained syntactic complexity features) and meaning-based complexity (argument quality). The results of the multidimensional analysis revealed that the most influential factor in determining overall essay scores was not form-based complexity but meaning-based complexity achieved through argument quality. Moreover, the results indicated that meaning-based complexity was strongly correlated with the use of complex nominals rather than clausal complexity. These insights have significant implications for both the teaching and assessment of argumentative essays among high school EFL learners, underscoring the importance of understanding what aspects of writing to prioritize and how best to assess student writing.

Citations: 0
Thirty years of writing assessment: A bibliometric analysis of research trends and future directions
IF 3.9 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-06-07 DOI: 10.1016/j.asw.2024.100862
Jihua Dong , Yanan Zhao , Louisa Buckingham

This study employs a bibliometric analysis to identify research trends in the field of writing assessment over the last 30 years (1993–2022). Using a dataset of 1,712 articles and 52,092 unique references, the study conducted keyword co-occurrence analyses to identify prominent research topics, co-citation analyses to identify influential publications and journals, and a structural variation analysis to identify transformative research in recent years. The results revealed the growing popularity of the writing assessment field and the increasing diversity of its research topics. Research trends have become more closely associated with technology and with cognitive and metacognitive processes. The influential publications indicate a shift in research interest towards cross-disciplinary work. The journals identified as key venues for writing assessment research also changed across the three decades. The latest transformative research points to possible future directions, including the integration of computational methods in writing assessment and investigations into the relationships between writing quality and various factors. This study contributes to our understanding of the development and future directions of writing assessment research and has implications for researchers and practitioners.
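The keyword co-occurrence step can be sketched in a few lines: each article contributes one count per unordered pair of its keywords, and frequent pairs mark prominent topic combinations. The article keywords below are invented, and real bibliometric work would first normalize keyword variants (plurals, synonyms, hyphenation).

```python
# Minimal sketch of keyword co-occurrence counting for bibliometric analysis.
from collections import Counter
from itertools import combinations

def cooccurrence_counts(keyword_lists):
    counts = Counter()
    for keywords in keyword_lists:
        # Sort so ("a", "b") and ("b", "a") collapse into one pair.
        for pair in combinations(sorted(set(keywords)), 2):
            counts[pair] += 1
    return counts

# Toy corpus: one keyword list per article (invented for illustration).
articles = [
    ["writing assessment", "rater reliability", "rubrics"],
    ["writing assessment", "automated scoring"],
    ["automated scoring", "writing assessment", "feedback"],
]

counts = cooccurrence_counts(articles)
print(counts[("automated scoring", "writing assessment")])  # 2
```

In practice the resulting pair counts are fed into network software (node = keyword, edge weight = co-occurrence count) to visualize topic clusters.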

Citations: 0
EvaluMate: Using AI to support students’ feedback provision in peer assessment for writing
IF 3.9 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-05-31 DOI: 10.1016/j.asw.2024.100864
Kai Guo

Peer feedback plays an important role in promoting learning in the writing classroom. However, providing high-quality feedback can be demanding for student reviewers. To address this challenge, this article proposes an AI-enhanced approach to peer feedback provision. I introduce EvaluMate, a newly developed online peer review system that leverages ChatGPT, a large language model (LLM), to scaffold student reviewers’ feedback generation. I discuss the design and functionality of EvaluMate, highlighting its affordances in supporting student reviewers’ provision of comments on peers’ essays. I also address the system’s limitations and propose potential solutions. Furthermore, I recommend future research on students’ engagement with this learning approach and its impact on learning outcomes. By presenting EvaluMate, I aim to inspire researchers and practitioners to explore the potential of AI technology in the teaching, learning, and assessment of writing.
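The abstract describes EvaluMate's design only at a high level and does not publish its implementation. As a purely illustrative sketch of the scaffolding idea, an LLM prompt could be assembled from instructor-defined, genre-specific review topics so the model suggests questions rather than writing the review itself. All names, topics, and wording below are invented and are not the system's actual code.

```python
# Hypothetical sketch of prompt scaffolding for peer review (not EvaluMate's
# actual implementation): the LLM is asked to prompt the student reviewer
# with questions, not to produce the feedback itself.

def build_review_prompt(essay_excerpt, topics):
    """Assemble a prompt that scaffolds, rather than replaces, a student's
    peer review, focusing on instructor-defined, genre-specific topics."""
    topic_lines = "\n".join(f"- {t}" for t in topics)
    return (
        "You are assisting a student peer reviewer. Do not write the review "
        "for them; instead, suggest questions they should ask about each "
        "topic below.\n\n"
        f"Topics:\n{topic_lines}\n\n"
        f"Essay excerpt:\n{essay_excerpt}\n"
    )

prompt = build_review_prompt(
    "Social media has changed how students write...",
    ["thesis clarity", "use of evidence", "counterarguments"],
)
print(prompt)
```

The design choice worth noting is the division of labor: the model surfaces genre-relevant questions, while the evaluative judgment stays with the student reviewer.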

Citations: 0
Comparing Chinese L2 writing performance in paper-based and computer-based modes: Perspectives from the writing product and process
IF 3.9 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-05-31 DOI: 10.1016/j.asw.2024.100849
Xiaozhu Wang, Jimin Wang

As writing is a complex language-producing process dependent on the writing environment and medium, the comparability of computer-based (CB) and paper-based (PB) writing assessments has been studied extensively since the emergence of computer-based language writing assessment. This study investigated differences in the writing product and process between CB and PB modes of writing assessment in Chinese as a second language, whose character writing system is considered challenging for learners. The many-facet Rasch model (MFRM) was adopted to reveal differences in text quality. Keystroke and handwriting trace data were used to shed light on the writing process. The results showed that Chinese L2 learners generated higher-quality texts with fewer character mistakes in the CB mode. In the CB mode they also revised much more and paused for shorter durations, and less frequently, between lower-level linguistic units. The quality of CB text is associated with revision behavior, whereas pause duration is a stronger predictor of PB text quality. The findings suggest that the act of handwriting Chinese characters makes the PB construct distinct from the CB writing assessment in L2 Chinese. Thus, the choice of assessment mode should consider the target language use and the test taker's characteristics.
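The pause measures derived from keystroke logs can be sketched as follows. The log, the numbers, and the 2-second pause threshold are all invented for illustration; thresholds in keystroke-logging research vary, and the study's actual measures distinguished pauses at different linguistic-unit boundaries.

```python
# Hypothetical sketch of pause analysis over a keystroke log: an inter-
# keystroke interval longer than `threshold` seconds counts as a pause.

def pause_stats(timestamps, threshold=2.0):
    """Return (number of pauses, mean pause duration in seconds) for
    inter-keystroke intervals longer than `threshold`."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    pauses = [iv for iv in intervals if iv > threshold]
    if not pauses:
        return 0, 0.0
    return len(pauses), sum(pauses) / len(pauses)

log = [0.0, 0.3, 0.7, 3.5, 3.9, 4.1, 9.1]  # keystroke onset times (invented)
n, mean_dur = pause_stats(log)
print(n, round(mean_dur, 2))  # 2 3.9
```

Pause frequency and mean duration computed this way are the kind of process variables the study relates to text quality in each mode.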

Citations: 0
A teacher’s inquiry into diagnostic assessment in an EAP writing course
IF 3.9 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-05-30 DOI: 10.1016/j.asw.2024.100848
Rabail Qayyum

Research into diagnostic assessment of writing has largely ignored how diagnostic feedback information leads to differentiated instruction and learning. This case study presents a teacher's account of validating an in-house diagnostic assessment procedure in an English for Academic Purposes writing course with a view to refining it. I developed a validity argument and gathered and interpreted related evidence, focusing on one student's performance in, and perception of, the assessment. The analysis revealed that the absence of proper feedback mechanisms limited the use of the test to an extent, somewhat weakened its impact, and reduced its potential for learning. I propose a modification to the assessment procedure involving a sample student feedback report.

Citations: 0
Construct representation and predictive validity of integrated writing tasks: A study on the writing component of the Duolingo English Test
IF 3.9 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-05-28 DOI: 10.1016/j.asw.2024.100846
Qin Xie

This study examined whether two integrated reading-to-write tasks could broaden the construct representation of the writing component of the Duolingo English Test (DET). It also tested whether they could enhance the DET's power to predict English academic writing performance at university. The tasks were (1) writing a summary based on two source texts and (2) writing a reading-to-write essay based on five texts. Both were administered to a sample (N = 204) of undergraduates from Hong Kong. Each participant also submitted an academic assignment written for the assessment of a disciplinary course. Three professional raters double-marked all writing samples against detailed analytical rubrics. Raw scores were first processed using Multi-Faceted Rasch Measurement to estimate inter- and intra-rater consistency and to generate adjusted (fair) measures. Based on these measures, descriptive analyses, sequential multiple regression, and structural equation modeling were conducted, in that order. The analyses verified the writing tasks' underlying component constructs and assessed their relative contributions to the overall integrated writing scores. Both tasks were found to contribute to the DET's construct representation and to add moderate predictive power for the domain performance. The findings, along with their practical implications, are discussed, especially regarding the complex relations between construct representation and predictive validity.

Citations: 0
How syntactic complexity indices predict Chinese L2 writing quality: An analysis of unified dependency syntactically-annotated corpus
IF 3.9 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-05-16 DOI: 10.1016/j.asw.2024.100847
Yuxin Hao , Xuelin Wang , Shuai Bin , Qihao Yang , Haitao Liu

Previous syntactic complexity (SC) research on L2 Chinese has overlooked a range of Chinese-specific structures and fine-grained indices. This study, utilizing a syntactically annotated Chinese L2 writing corpus, employs both large-grained and fine-grained syntactic complexity indices to investigate, from macro and micro perspectives, the relationship between syntactic complexity and the quality of writing produced by English-speaking Chinese second language (ECSL) learners. Our findings reveal the following: (a) at a large-grained level of analysis, the generic syntactic complexity (GSC) index number of T-units per sentence and the Chinese-specific syntactic complexity (CSC) index number of clauses per topic chain unit account for 14.5% of the total variance in ECSL learners' writing scores; (b) the syntactic diversity model alone accounts for 24.7% of the variance in ECSL learners' Chinese writing scores; (c) a stepwise regression model that integrates fine-grained SC indices extracted from the syntactically annotated corpus explains 43.7% of the variance in Chinese writing quality. This model incorporates CSC indices such as the average ratio of dependency types per 30 dependency segments, the ratio of adjuncts to sentence end, the ratio of predicate complements, the ratio of numeral adjuncts, and the mean length of Topic-Comment-Unit dependency distance, as well as GSC indices such as the ratio of main governors, the ratio of attributers, the ratio of coordinating adjuncts, and the ratio of sentential objects. These findings highlight the valuable insights that syntactically annotated fine-grained SC indices offer into the writing characteristics of ECSL learners.
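One standard dependency-based measure of the kind used above, mean dependency distance, can be computed directly from an annotated parse: average the absolute distance between each dependent and its head. The toy parse and the conventions assumed here (1-indexed word positions, head 0 marking the root, root arc excluded) are illustrative and do not reproduce the study's actual annotation scheme or pipeline.

```python
# Hypothetical sketch: mean dependency distance (MDD) over one parsed sentence.

def mean_dependency_distance(arcs):
    """arcs: list of (dependent_position, head_position) pairs, 1-indexed;
    the root arc (head position 0) is excluded by convention."""
    distances = [abs(dep - head) for dep, head in arcs if head != 0]
    return sum(distances) / len(distances)

# Toy parse of a 5-word sentence (invented): (word position, head position).
arcs = [(1, 2), (2, 0), (3, 5), (4, 5), (5, 2)]
print(mean_dependency_distance(arcs))  # (1 + 2 + 1 + 3) / 4 = 1.75
```

Corpus-level indices of this family are typically means over all sentences of a text, which is then related to writing scores via regression as in the study.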

引用次数: 0
Visualizing formative feedback in statistics writing: An exploratory study of student motivation using DocuScope Write & Audit 统计写作中的可视化形成性反馈:使用 DocuScope Write & Audit 对学生写作动机进行探索性研究
IF 3.9 1区 文学 Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date : 2024-04-01 DOI: 10.1016/j.asw.2024.100830
Michael Laudenbach, David West Brown, Zhiyu Guo, Suguru Ishizaki, Alex Reinhart, Gordon Weinberg

Recently, formative feedback in writing instruction has been supported by technologies generally referred to as Automated Writing Evaluation tools. However, such tools are limited in their capacity to explore specific disciplinary genres, and they have shown mixed results in student writing improvement. We explore how technology-enhanced writing interventions can positively affect student attitudes toward and beliefs about writing, both reinforcing content knowledge and increasing student motivation. Using a student-facing text-visualization tool called Write & Audit, we hosted revision workshops for students (n = 30) in an introductory-level statistics course at a large North American University. The tool is designed to be flexible: instructors of various courses can create expectations and predefine topics that are genre-specific. In this way, students are offered non-evaluative formative feedback which redirects them to field-specific strategies. To gauge the usefulness of Write & Audit, we used a previously validated survey instrument designed to measure the construct model of student motivation (Ling et al. 2021). Our results show significant increases in student self-efficacy and beliefs about the importance of content in successful writing. We contextualize these findings with data from three student think-aloud interviews, which demonstrate metacognitive awareness while using the tool. Ultimately, this exploratory study is non-experimental, but it contributes a novel approach to automated formative feedback and confirms the promising potential of Write & Audit.

最近,写作教学中的形成性反馈得到了一般称为 "自动写作评价工具 "的技术的支持。然而,这些工具在探索特定学科体裁方面的能力有限,而且在提高学生写作水平方面的效果也参差不齐。我们探讨了技术强化的写作干预如何对学生的写作态度和写作信念产生积极影响,既强化了内容知识,又提高了学生的写作积极性。我们使用一个名为 "Write & Audit "的面向学生的文本可视化工具,为北美一所大型大学统计学入门课程的学生(n = 30)举办了修改研讨会。该工具的设计非常灵活:不同课程的讲师可以创建期望值,并预先确定特定体裁的主题。通过这种方式,学生可以获得非评价性的形成性反馈,从而转向特定领域的策略。为了衡量 Write & Audit 的实用性,我们使用了之前经过验证的调查工具,该工具旨在测量学生动机的建构模型(Ling 等人,2021 年)。我们的结果表明,学生的自我效能感和对写作内容的重要性的信念有了明显提高。我们将这些发现与来自三个学生思考-朗读访谈的数据相结合,这些数据显示了学生在使用该工具时的元认知意识。最终,这项探索性研究虽然不是实验性的,但它为自动形成性反馈提供了一种新方法,并证实了 Write & Audit 的巨大潜力。
{"title":"Visualizing formative feedback in statistics writing: An exploratory study of student motivation using DocuScope Write & Audit","authors":"Michael Laudenbach,&nbsp;David West Brown,&nbsp;Zhiyu Guo,&nbsp;Suguru Ishizaki,&nbsp;Alex Reinhart,&nbsp;Gordon Weinberg","doi":"10.1016/j.asw.2024.100830","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100830","url":null,"abstract":"<div><p>Recently, formative feedback in writing instruction has been supported by technologies generally referred to as Automated Writing Evaluation tools. However, such tools are limited in their capacity to explore specific disciplinary genres, and they have shown mixed results in student writing improvement. We explore how technology-enhanced writing interventions can positively affect student attitudes toward and beliefs about writing, both reinforcing content knowledge and increasing student motivation. Using a student-facing text-visualization tool called <em>Write &amp; Audit</em>, we hosted revision workshops for students (n = 30) in an introductory-level statistics course at a large North American University. The tool is designed to be flexible: instructors of various courses can create expectations and predefine topics that are genre-specific. In this way, students are offered non-evaluative formative feedback which redirects them to field-specific strategies. To gauge the usefulness of Write &amp; Audit, we used a previously validated survey instrument designed to measure the construct model of student motivation (Ling et al. 2021). Our results show significant increases in student self-efficacy and beliefs about the importance of content in successful writing. We contextualize these findings with data from three student think-aloud interviews, which demonstrate metacognitive awareness while using the tool. 
Ultimately, this exploratory study is non-experimental, but it contributes a novel approach to automated formative feedback and confirms the promising potential of Write &amp; Audit.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"60 ","pages":"Article 100830"},"PeriodicalIF":3.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1075293524000230/pdfft?md5=7f031636dffbbdcdb70229b30498cf92&pid=1-s2.0-S1075293524000230-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140330991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Engagement with supervisory feedback on master’s theses: Do supervisors and students see eye to eye? 参与导师对硕士论文的反馈:导师和学生意见一致吗?
IF 3.9 1区 文学 Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date : 2024-04-01 DOI: 10.1016/j.asw.2024.100841
Madhu Neupane Bastola , Guangwei Hu

Student engagement has attracted much research attention in higher education because of various potential benefits associated with improved engagement. Despite extensive research on student engagement in higher education, little has been written about graduate students’ engagement with supervisory feedback. This paper reports on a study on student engagement with supervisory feedback on master’s theses conducted in the context of Nepalese higher education. The study employed an exploratory sequential mixed-methods design that drew on interviews and a questionnaire-based survey involving supervisors and students from four disciplines at a comprehensive university in Nepal. Analyses of the qualitative and quantitative data revealed significant differences between supervisors’ and students’ perceptions of all types (i.e., affective, cognitive, and behavioral) of student engagement. Significant disciplinary variations were also observed in supervisors’ and students’ perceptions of negative affect, cognitive engagement, and behavioral engagement. Furthermore, disciplinary background and feedback role interacted to shape perceptions of student engagement. These findings have implications for improving student engagement with supervisory feedback.

在高等教育领域,学生参与度的研究备受关注,因为学生参与度的提高会带来各种潜在的益处。尽管有关高等教育中学生参与度的研究非常广泛,但有关研究生参与导师反馈的研究却很少。本文报告了在尼泊尔高等教育背景下开展的一项关于学生参与导师对硕士论文反馈的研究。该研究采用了一种探索性顺序混合方法设计,通过访谈和问卷调查的方式,对尼泊尔一所综合性大学四个学科的导师和学生进行了调查。对定性和定量数据的分析表明,督导和学生对所有类型的学生参与(即情感、认知和行为)的看法存在显著差异。在督导和学生对负面情绪、认知参与和行为参与的看法上,也发现了明显的学科差异。此外,学科背景和反馈角色相互作用,影响了对学生参与度的看法。这些发现对提高学生对督导反馈的参与度具有重要意义。
{"title":"Engagement with supervisory feedback on master’s theses: Do supervisors and students see eye to eye?","authors":"Madhu Neupane Bastola ,&nbsp;Guangwei Hu","doi":"10.1016/j.asw.2024.100841","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100841","url":null,"abstract":"<div><p>Student engagement has attracted much research attention in higher education because of various potential benefits associated with improved engagement. Despite extensive research on student engagement in higher education, little has been written about graduate students’ engagement with supervisory feedback. This paper reports on a study on student engagement with supervisory feedback on master’s theses conducted in the context of Nepalese higher education. The study employed an exploratory sequential mixed-methods design that drew on interviews and a questionnaire-based survey involving supervisors and students from four disciplines at a comprehensive university in Nepal. Analyses of the qualitative and quantitative data revealed significant differences between supervisors’ and students’ perceptions of all types (i.e., affective, cognitive, and behavioral) of student engagement. Significant disciplinary variations were also observed in supervisors’ and students’ perceptions of negative affect, cognitive engagement, and behavioral engagement. Furthermore, disciplinary background and feedback role interacted to shape perceptions of student engagement. 
These findings have implications for improving student engagement with supervisory feedback.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"60 ","pages":"Article 100841"},"PeriodicalIF":3.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140644684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
期刊
Assessing Writing
全部 Acc. Chem. Res. ACS Applied Bio Materials ACS Appl. Electron. Mater. ACS Appl. Energy Mater. ACS Appl. Mater. Interfaces ACS Appl. Nano Mater. ACS Appl. Polym. Mater. ACS BIOMATER-SCI ENG ACS Catal. ACS Cent. Sci. ACS Chem. Biol. ACS Chemical Health & Safety ACS Chem. Neurosci. ACS Comb. Sci. ACS Earth Space Chem. ACS Energy Lett. ACS Infect. Dis. ACS Macro Lett. ACS Mater. Lett. ACS Med. Chem. Lett. ACS Nano ACS Omega ACS Photonics ACS Sens. ACS Sustainable Chem. Eng. ACS Synth. Biol. Anal. Chem. BIOCHEMISTRY-US Bioconjugate Chem. BIOMACROMOLECULES Chem. Res. Toxicol. Chem. Rev. Chem. Mater. CRYST GROWTH DES ENERG FUEL Environ. Sci. Technol. Environ. Sci. Technol. Lett. Eur. J. Inorg. Chem. IND ENG CHEM RES Inorg. Chem. J. Agric. Food. Chem. J. Chem. Eng. Data J. Chem. Educ. J. Chem. Inf. Model. J. Chem. Theory Comput. J. Med. Chem. J. Nat. Prod. J PROTEOME RES J. Am. Chem. Soc. LANGMUIR MACROMOLECULES Mol. Pharmaceutics Nano Lett. Org. Lett. ORG PROCESS RES DEV ORGANOMETALLICS J. Org. Chem. J. Phys. Chem. J. Phys. Chem. A J. Phys. Chem. B J. Phys. Chem. C J. Phys. Chem. Lett. Analyst Anal. Methods Biomater. Sci. Catal. Sci. Technol. Chem. Commun. Chem. Soc. Rev. CHEM EDUC RES PRACT CRYSTENGCOMM Dalton Trans. Energy Environ. Sci. ENVIRON SCI-NANO ENVIRON SCI-PROC IMP ENVIRON SCI-WAT RES Faraday Discuss. Food Funct. Green Chem. Inorg. Chem. Front. Integr. Biol. J. Anal. At. Spectrom. J. Mater. Chem. A J. Mater. Chem. B J. Mater. Chem. C Lab Chip Mater. Chem. Front. Mater. Horiz. MEDCHEMCOMM Metallomics Mol. Biosyst. Mol. Syst. Des. Eng. Nanoscale Nanoscale Horiz. Nat. Prod. Rep. New J. Chem. Org. Biomol. Chem. Org. Chem. Front. PHOTOCH PHOTOBIO SCI PCCP Polym. Chem.
×
引用
GB/T 7714-2015
复制
MLA
复制
APA
复制
导出至
BibTeX EndNote RefMan NoteFirst NoteExpress
×
0
微信
客服QQ
Book学术公众号 扫码关注我们
反馈
×
意见反馈
请填写您的意见或建议
请填写您的手机或邮箱
×
提示
您的信息不完整,为了账户安全,请先补充。
现在去补充
×
提示
您因"违规操作"
具体请查看互助需知
我知道了
×
提示
现在去查看 取消
×
提示
确定
Book学术官方微信
Book学术文献互助
Book学术文献互助群
群 号:481959085
Book学术
文献互助 智能选刊 最新文献 互助须知 联系我们:info@booksci.cn
Book学术提供免费学术资源搜索服务,方便国内外学者检索中英文文献。致力于提供最便捷和优质的服务体验。
Copyright © 2023 Book学术 All rights reserved.
ghs 京公网安备 11010802042870号 京ICP备2023020795号-1