
Latest Articles in Assessing Writing

Examining the predictive power of L2 writing anxiety on L2 writing performance in simple and complex tasks under task-readiness conditions
IF 4.2 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2025-01-01 DOI: 10.1016/j.asw.2024.100912
Mahmoud Abdi Tabari , Mahsa Farahanynia , Elouise Botes
Task-based research has often overlooked individual differences (IDs) and task-readiness factors in developing instructional materials and curricula. This study addresses these gaps by examining how L2 writing anxiety influences the Complexity, Accuracy, Lexis, and Fluency (CALF) of writing performance across tasks with varying cognitive demands under two task-readiness conditions: task repetition and task rehearsal. Ninety undergraduate ESL students completed a questionnaire on L2 writing anxiety before performing two argumentative tasks of differing cognitive complexity, administered one week apart in a counterbalanced design. After completing the first set of tasks, participants filled out a perception questionnaire to validate the task complexity manipulation. They then repeated the same tasks within the same timeframe. The findings revealed that while anxiety positively affected syntactic complexity, it negatively impacted accuracy overall. Under task repetition (implicit preparation), anxiety reduced both syntactic complexity and accuracy. In contrast, under task rehearsal (conscious preparation), anxiety had a positive effect on lexical complexity. Specifically, in the second performance, anxiety improved both accuracy and lexical complexity under task rehearsal and enhanced fluency and lexical complexity under task repetition. However, under task rehearsal, anxiety reduced syntactic complexity for both simple and complex tasks. Under task repetition, anxiety degraded lexical complexity, but only when the complex task was performed. Furthermore, task repetition outperformed task rehearsal in six out of eight measures: MLTU, DC/T, CN/T, EFC/C, Vocd, and WRDFRQmc. The cognitively complex task also produced better outcomes than the simple task across these six measures, as well as WMP. Performance improved on the second attempt across all measures and WMP.
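The CALF measures named above (MLTU, DC/T, EFC/C, and so on) are ratio-based indices computed from annotated counts. As a rough sketch of how such indices work — with invented counts, not data from the study — three of them can be expressed as simple ratios:

```python
# Toy illustration of ratio-based CALF indices from pre-annotated counts.
# All numbers are invented for demonstration; real studies derive the
# counts from parsed and hand-coded learner texts.

def mltu(word_count: int, t_units: int) -> float:
    """Mean length of T-unit: words per T-unit (syntactic complexity)."""
    return word_count / t_units

def dc_per_t(dependent_clauses: int, t_units: int) -> float:
    """Dependent clauses per T-unit (degree of subordination)."""
    return dependent_clauses / t_units

def efc_per_c(error_free_clauses: int, clauses: int) -> float:
    """Error-free clauses per clause (accuracy)."""
    return error_free_clauses / clauses

# Hypothetical essay: 300 words, 20 T-units, 14 dependent clauses,
# 34 clauses of which 25 are error-free.
print(round(mltu(300, 20), 2))      # 15.0
print(round(dc_per_t(14, 20), 2))   # 0.7
print(round(efc_per_c(25, 34), 2))  # 0.74
```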
Assessing Writing, Volume 63, Article 100912.
Citations: 0
Connecting L2 reading emotions and writing performance through imaginative capacity in the story continuation writing task: A gender difference perspective
IF 4.2 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2025-01-01 DOI: 10.1016/j.asw.2025.100914
Jianling Zhan , Ying Xu
Second-language (L2) integrated writing tasks, like the story continuation writing task (SCWT), evaluate students’ reading and writing abilities. Although the relationship between writing emotions and performance has been established, the influence of reading emotions in L2 integrated writing remains understudied. The SCWT, newly incorporated into China’s college entrance exam (Gaokao), is designed to evoke emotions and stimulate imagination. This study examined gender-related differences in the relationship between reading emotions and SCWT performance, considering the mediating role of imaginative capacity. It involved 679 Chinese high school students, comprising 279 male and 400 female students, who participated in the SCWT and completed a questionnaire assessing their reading emotions (enjoyment, anxiety, curiosity) and imaginative capacity (creative and reproductive). Results indicated that female students scored significantly higher on reading enjoyment, curiosity, and writing performance than male students. Multi-group structural equation modeling analysis revealed that reading enjoyment predicted reading curiosity for both genders, and reading curiosity further predicted both types of imaginative capacity. However, the analysis revealed that among female students, writing performance was significantly enhanced by the synergistic effects of reading enjoyment, curiosity, and reproductive imagination. Pedagogical implications for promoting test fairness between gender groups and enhancing reading processes within the SCWT were discussed.
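The mediated paths described above (e.g., reading enjoyment → curiosity → writing performance) were estimated with multi-group structural equation modeling. A much simpler sketch of the core idea — an indirect effect as the product of two regression slopes, on simulated data that stand in for the study's variables — looks like this:

```python
# Minimal sketch of a single mediated path (enjoyment -> curiosity ->
# performance) via OLS slopes. The study itself used multi-group SEM;
# all data below are simulated, and the X -> Y effect is, by construction,
# fully carried by the mediator.
import numpy as np

rng = np.random.default_rng(0)
n = 400
enjoyment = rng.normal(size=n)
curiosity = 0.5 * enjoyment + rng.normal(scale=0.8, size=n)    # path a
performance = 0.4 * curiosity + rng.normal(scale=0.8, size=n)  # path b

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    design = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

a = slope(enjoyment, curiosity)
b = slope(curiosity, performance)
print(f"indirect effect a*b ≈ {a * b:.2f}")  # close to 0.5 * 0.4 = 0.20
```

In a real mediation analysis, path b would be estimated while controlling for the predictor; the shortcut works here only because the simulation contains no direct path.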
Assessing Writing, Volume 63, Article 100914.
Citations: 0
Examining EFL learners’ quantity and quality of uptake of teacher corrective feedback on writing across three different editing settings
IF 4.2 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2025-01-01 DOI: 10.1016/j.asw.2024.100911
Saleh Mosleh Alharthi
Despite the role of dialogue in feedback uptake, no study has examined students’ uptake across different dialogue-based settings. This study of 20 Saudi EFL students therefore examined their uptake of feedback in self-dialogue-based, learner-learner dialogue-based, and teacher-learner dialogue-based editing settings. Analysis of teacher corrective feedback and students’ first and revised essay drafts revealed that uptake quantity (92.3%, 97.5%, and 95.4%) and uptake quality (71.3%, 80.5%, and 93.4%) varied across the three settings, respectively. Moreover, while students integrated more global feedback in the teacher-learner (38.8%) and learner-learner dialogue-based editing settings (38.8%), they integrated more local feedback (69.1%) in the self-dialogue-based editing setting. A post-hoc analysis showed significant differences in uptake quantity in favor of the learner-learner and teacher-learner dialogue-based editing settings, and in uptake quality in favor of the teacher-learner dialogue-based editing setting. Moreover, the learner-learner and teacher-learner dialogue-based editing settings led to higher global feedback quality than the self-dialogue-based setting. Students’ local feedback uptake differed significantly between the self-dialogue-based and teacher-learner dialogue-based editing settings. Despite the perceived learning benefits of feedback dialogues, students were challenged by initial apprehension, the nature of the feedback, and technology use in feedback dialogues. The study offers useful implications for teachers and researchers.
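The uptake percentages reported above are ratios of feedback points to revisions. One plausible operationalization — uptake quantity as the share of feedback points acted on, uptake quality as the share of acted-on points revised successfully — can be sketched as follows (counts are invented; the study's exact coding scheme may differ):

```python
# Sketch of the two uptake metrics as percentages. The tallies below are
# hypothetical; uptake "quality" is assumed here to mean successful
# revisions among feedback points acted on.

def pct(part: int, whole: int) -> float:
    """Percentage of `part` in `whole`, rounded to one decimal place."""
    return round(100 * part / whole, 1)

# Hypothetical tallies for one editing setting: 130 feedback points given,
# 120 acted on in the revised draft, 100 of those revised successfully.
quantity = pct(120, 130)
quality = pct(100, 120)
print(quantity, quality)  # 92.3 83.3
```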
Assessing Writing, Volume 63, Article 100911.
Citations: 0
Examining the use of academic vocabulary in first-year ESL undergraduates’ writing: A corpus-driven study in Hong Kong
IF 4.2 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2025-01-01 DOI: 10.1016/j.asw.2024.100913
Edsoulla Chung, Aaron Wan
A good command of academic vocabulary is important for academic success in higher education. However, research has primarily focused on the receptive academic vocabulary knowledge of L2 learners while devoting relatively limited attention to their productive use of such vocabulary and its impact on writing quality. To address this gap, we analysed the problem-solution essays written by 168 first-year undergraduates in Hong Kong, focusing on the relationship between their use of academic words in the Academic Vocabulary List (AVL) and the overall quality of their writing. We also explored the relationship between the size of students’ receptive academic vocabulary and the frequency of its use in writing. Findings revealed that essays with high scores contained a greater density and diversity of academic vocabulary than low-scored essays, with greater frequency of words in the 1–500 and 501–1000 tiers of the AVL significantly predicting better writing quality. The essays also showed a significant relationship between the participants’ receptive academic vocabulary size and the diversity of academic words used in writing. However, no significant relationship was observed between receptive academic vocabulary size and the density of academic words used. We highlight the implications of these findings for EAP teaching and research.
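The two lexical indices discussed above — density (proportion of running words that are academic) and diversity (number of distinct academic words) — are straightforward to compute once a word list is fixed. A toy sketch with an invented mini-list standing in for the full Academic Vocabulary List (AVL):

```python
# Illustration of academic-word density and diversity. The five-word list
# and the sample text are invented; the study used the full AVL and real
# student essays.

avl = {"analyze", "data", "method", "significant", "evidence"}

text = ("we analyze the data with a method that gives significant "
        "evidence and more data").split()

academic_tokens = [w for w in text if w in avl]
density = len(academic_tokens) / len(text)   # academic tokens / all tokens
diversity = len(set(academic_tokens))        # distinct academic types
print(round(density, 2), diversity)          # 0.43 5
```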
Assessing Writing, Volume 63, Article 100913.
Citations: 0
A meta-analysis of relationships between syntactic features and writing performance and how the relationships vary by student characteristics and measurement features
IF 4.2 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2025-01-01 DOI: 10.1016/j.asw.2024.100909
Jiali Wang, Young-Suk G. Kim, Joseph Hin Yan Lam, Molly Ann Leachman
Students’ proficiency in constructing sentences shapes both the writing process and its products. The linguistic demands of writing differ by student characteristics and by measurement features. To identify these varied syntactic demands, we conducted a meta-analysis examining the relationships between syntactic features (complexity and accuracy) and writing performance (quality, productivity, and fluency), as well as the moderating effects of student characteristics and measurement features. A total of 109 studies (871 effect sizes; 24,628 participants) met the inclusion criteria. Results showed weak relationships for syntactic accuracy (r = .25) and syntactic complexity (r = .16). Writer characteristics (grade level and language proficiency) and measurement features (writing genre, writing outcome, whether the writing task was text-based, and type of syntactic complexity measure) were significant moderators for certain syntactic features. The findings highlight the importance of writer and measurement factors when considering the relationships between linguistic features in writing and writing performance. Implications are discussed regarding the selection of syntactic features in assessing language use in writing, gaps in the literature, and significance for writing instruction and assessment.
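Pooled correlations like the r = .25 and r = .16 reported above are typically obtained by Fisher-z-transforming each study's r, weighting by inverse variance, and back-transforming. A minimal fixed-effect sketch (the meta-analysis itself likely used a more elaborate random-effects model, and the inputs below are invented):

```python
# Sketch of pooling correlations via Fisher's z-transform with
# inverse-variance weights, var(z) = 1 / (n - 3). Study inputs are invented.
import math

studies = [(0.30, 50), (0.20, 120), (0.25, 80)]  # (r, n) per study

def fisher_z(r):
    """Fisher z-transform of a correlation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inv_fisher(z):
    """Back-transform z to a correlation (tanh)."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

num = sum((n - 3) * fisher_z(r) for r, n in studies)
den = sum(n - 3 for _, n in studies)
pooled_r = inv_fisher(num / den)
print(f"pooled r ≈ {pooled_r:.2f}")  # ≈ 0.24
```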
Assessing Writing, Volume 63, Article 100909.
Citations: 0
Editorial Volume 63
IF 4.2 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2025-01-01 DOI: 10.1016/j.asw.2025.100917
Martin East , David Slomp
Assessing Writing, Volume 63, Article 100917.
Citations: 0
Effects of a genre and topic knowledge activation device on a standardized writing test performance
IF 4.2 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-10-01 DOI: 10.1016/j.asw.2024.100898
Natalia Ávila Reyes , Diego Carrasco , Rosario Escribano , María Jesús Espinosa , Javiera Figueroa , Carolina Castillo
The aim of this article was twofold: first, to introduce a design for a writing test intended for application in large-scale assessments of writing, and second, to experimentally examine the effects of employing a device for activating prior knowledge of topic and genre as a means of controlling construct-irrelevant variance and enhancing validity. An authentic, situated writing task was devised, offering students a communicative purpose and a defined audience. Two devices were utilized for the cognitive activation of topic and genre knowledge: an infographic and a genre model. The participants in this study were 162 fifth-grade students from Santiago de Chile, with 78 students assigned to the experimental condition (with activation device) and 84 students assigned to the control condition (without activation device). The results demonstrate that the odds of presenting good writing ability are higher for students who were part of the experimental group, even when controlling for text transcription ability, considered a predictor of writing. These findings hold implications for the development of large-scale tests of writing guided by principles of educational and social justice.
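The "odds" of presenting good writing ability reported above come from a model of a binary outcome. As a quick, unadjusted sketch of the underlying arithmetic — odds as p / (1 − p) and an odds ratio between conditions, using invented counts rather than the study's data (which were modeled while controlling for transcription ability):

```python
# Sketch of odds and an odds ratio for a binary "good writing" outcome.
# Counts are hypothetical and unadjusted; the study estimated the effect
# while controlling for text transcription ability.

def odds(p: float) -> float:
    """Odds corresponding to a probability p."""
    return p / (1 - p)

p_experimental = 60 / 78  # hypothetical: 60 of 78 judged good writers
p_control = 50 / 84       # hypothetical: 50 of 84 judged good writers

odds_ratio = odds(p_experimental) / odds(p_control)
print(round(odds_ratio, 2))  # 2.27
```

An odds ratio above 1 would indicate, as in the study, higher odds of good writing in the activation-device condition.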
Assessing Writing, Volume 62, Article 100898.
Citations: 0
Detecting and assessing AI-generated and human-produced texts: The case of second language writing teachers
IF 4.2 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-10-01 DOI: 10.1016/j.asw.2024.100899
Loc Nguyen , Jessie S. Barrot
Artificial intelligence (AI) technologies have recently attracted the attention of second language (L2) writing scholars and practitioners. While they recognize these tools’ viability, they have also raised concerns about the tools’ potential to distort assessments of students’ actual writing performance. It is, therefore, crucial for teachers to discern AI-generated essays from human-produced work for more accurate assessment. However, limited information is available about how teachers assess and distinguish between essays produced by AI and by human authors. This study therefore analyzed the scores and comments teachers gave and examined their strategies for identifying the source of the essays. Findings showed that essays by a native English-speaking (NS) lecturer and by ChatGPT were rated highly. Meanwhile, essays by an NS college student, a non-native English-speaking (NNS) college student, and an NNS lecturer scored lower, which made them distinguishable from AI-generated text. The study also revealed that teachers could not consistently identify AI-generated text, particularly relative to texts written by an NS professional. These findings were attributed to teachers’ prior engagement with AI writing tools, familiarity with common L2 learner errors, and exposure to native and non-native English writing. From these results, implications for L2 writing instruction and future research are discussed.
Assessing Writing, Volume 62, Article 100899.
From these results, implications for L2 writing instruction and future research are discussed.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100899"},"PeriodicalIF":4.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A comparative study of voice in Chinese English-major undergraduates’ timed and untimed argument writing
IF 4.2 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-10-01 DOI: 10.1016/j.asw.2024.100896
Xiangmin Zeng , Jie Liu , Neil Evan Jon Anthony Bowen
As a somewhat elusive and opaque concept, voice can be a challenging and formidable hurdle for second language (L2) writers. One area that exemplifies this struggle is timed argument writing, where authors must position claims, ideas, and individual perspectives relative to an existing knowledge base and scholarly community under the confines of time. To enrich our understanding of voice construction in L2 English writers’ timed writing, we explored how 41 Chinese English-major undergraduates deployed authorial voice in two prompt-based argument writing tasks (timed versus untimed). We also sampled their self-reported knowledge, use, and understanding of voice through a survey-based instrument. To compare the quantity and quality of voice construction between the two samples, we measured 10 voice categories, three voice dimensions, and overall voice strength. Results showed that only two categories displayed statistically significant differences in terms of frequencies, but all three voice dimensions and overall voice strength scored significantly higher in untimed writing samples. Based on the results of our text analysis and survey, we further highlight the complexities of voice in L2 writing, provide evidence in support of existing voice rubrics, and make practical suggestions for teaching and assessing voice in writing.
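The timed-versus-untimed comparison above pits the same writers' scores against each other on each voice dimension, which calls for a paired test. A hand-rolled paired t statistic on invented scores (the study's actual measures, tests, and values differ) illustrates the computation:

```python
# Sketch of a paired t statistic for one voice dimension, scored for the
# same eight writers under timed and untimed conditions. Scores are
# invented for illustration only.
import math

timed   = [3.1, 2.8, 3.5, 3.0, 2.9, 3.2, 3.4, 2.7]
untimed = [3.6, 3.0, 3.9, 3.4, 3.1, 3.5, 3.8, 3.0]

diffs = [u - t for t, u in zip(timed, untimed)]
n = len(diffs)
mean = sum(diffs) / n
sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
t_stat = mean / (sd / math.sqrt(n))  # compare against t with n-1 df
print(round(t_stat, 2))
```

In practice this would be done with a library routine (e.g., a paired-sample t-test in scipy.stats), with a non-parametric alternative if the score differences are not roughly normal.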
The impact of task duration on the scoring of independent writing responses of adult L2-English writers
IF 4.2 Zone 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date : 2024-10-01 DOI: 10.1016/j.asw.2024.100895
Ben Naismith , Yigal Attali , Geoffrey T. LaFlair
In writing assessment, there is inherently a tension between authenticity and practicality: tasks with longer durations may more closely reflect real-life writing processes but are less feasible to administer and score. What is more, given total testing time, there is necessarily a trade-off between task duration and number of tasks. Traditionally, high-stakes assessments have managed this trade-off by administering one or two writing tasks each test, allowing 20–40 minutes per task. However, research on second language (L2) English writing has not found longer task durations to significantly improve score validity or reliability. Importantly, very few studies have compared much shorter durations for writing tasks to more traditional allotments. To explore this issue, we asked adult L2-English test takers to respond to two writing prompts with either 5-minute or 20-minute time limits. Responses were then evaluated by expert human raters and an automated writing evaluation tool. Regardless of scoring method, short duration scores evidenced equally high test-retest reliability and criterion validity as long duration scores. As expected, longer task duration yielded higher scores, but regardless of duration, test takers demonstrated the entire spectrum of writing proficiency. Implications for writing assessment are discussed in relation to scoring practices and task design.