Pub Date: 2025-07-01 | Epub Date: 2025-05-31 | DOI: 10.1016/j.asw.2025.100955
Hieu Manh Do
Recent developments in artificial intelligence (AI) have led to the emergence of chatbots as an effective tool for language learning. One such tool is Google Gemini, which engages writers and researchers in natural and human-like interactive experiences. Google Gemini offers significant benefits for improving efficiency and collaboration in academic writing but also presents challenges related to accuracy, ethical considerations, and potential impacts on writer creativity. Thus, this tech review aims to explore the potential benefits and limitations of Google Gemini in writing. This review also concludes with recommendations for writing instructors and suggestions for future researchers in the field.
Potentials and pitfalls of Google Gemini in writing: Implications for educators. Assessing Writing, 65, Article 100955.
Pub Date: 2025-07-01 | Epub Date: 2025-05-06 | DOI: 10.1016/j.asw.2025.100948
Seohyeon Choi, Kristen L. McMaster, Nana Kim
Curriculum-based measurement (CBM) is a valuable method for assessing students with intensive learning needs, including writing. However, research on English writing CBMs has paid insufficient attention to linguistic diversity, especially among young or beginning writers, raising questions about the validity of CBMs in evaluating multilingual students’ early writing development in English. The purpose of this study was to evaluate the measurement invariance of Word Dictation, a CBM writing task measuring English transcription skills at the word level, across multilingual and English-monolingual students with intensive writing needs in the U.S. Using data from 349 students, primarily in Grades 1–3, we evaluated measurement invariance at both item and assessment levels. Using different scoring metrics and various analytical methods, results revealed a few items as potentially displaying differential item functioning. Results also showed that, at the assessment level, Word Dictation did not function differently across the two student groups. The findings provide important evidence supporting the measure’s validity, fairness, and its CBM Stage 1 technical adequacy. We discuss the limitations of the study, along with future research directions and implications for educators using Word Dictation to serve linguistically diverse students requiring intensive support in developing English writing skills.
Toward the fair and valid use of curriculum-based measurement for students with intensive writing needs and linguistically diverse backgrounds. Assessing Writing, 65, Article 100948.
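Measurement-invariance screens of the kind described above typically begin with differential item functioning (DIF) checks, which ask whether two groups matched on overall ability respond differently to a given item. The study's own analyses are not reproduced here; as an illustration only, a minimal Mantel-Haenszel DIF screen for one dichotomous item, with invented data, can be sketched as:

```python
from collections import defaultdict

def mantel_haenszel_or(records):
    """Mantel-Haenszel common odds ratio for one dichotomous item.

    records: iterable of (group, stratum, correct), where group is
    "ref" or "focal", stratum is a matched total-score band, and
    correct is 0/1. An odds ratio near 1.0 suggests no DIF.
    """
    cells = defaultdict(lambda: [0, 0, 0, 0])  # stratum -> [A, B, C, D]
    for group, stratum, correct in records:
        c = cells[stratum]
        if group == "ref":
            c[0 if correct else 1] += 1   # A = ref correct, B = ref incorrect
        else:
            c[2 if correct else 3] += 1   # C = focal correct, D = focal incorrect
    num = den = 0.0
    for a, b, c, d in cells.values():
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den if den else float("inf")

# Invented data: both groups answer alike within each score band -> OR ≈ 1
data = []
for stratum, n_correct in [(0, 2), (1, 3)]:
    for g in ("ref", "focal"):
        data += [(g, stratum, 1)] * n_correct + [(g, stratum, 0)] * (4 - n_correct)
print(round(mantel_haenszel_or(data), 2))  # → 1.0 (no DIF in this toy data)
```

Operational DIF analyses would add significance tests and effect-size classification on top of the odds ratio; this sketch only shows the core stratified computation.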
Pub Date: 2025-07-01 | Epub Date: 2025-03-22 | DOI: 10.1016/j.asw.2025.100935
Xiaolong Cheng , Jinfen Xu
While there are copious studies investigating teacher written feedback in L2 writing contexts, much remains to be discovered about how English-L1 teachers enact this practice in EFL classrooms. To fill this gap, this mixed-methods study collected data from multiple sources, including questionnaires, semi-structured interviews, students’ writing samples, stimulated recalls, and documents, to examine such teachers’ implementation of written feedback and its influencing factors in Chinese tertiary EFL settings. The results of the survey study were generally in line with those of the in-depth study in terms of feedback scope, strategy, and focus, but differences emerged in feedback orientation. Furthermore, both the quantitative and qualitative results showed that the teachers’ provision of written feedback was mediated by a synthesis of factors related to teachers, students, and contexts. Important pedagogical implications are also discussed.
A mixed-methods approach to English-L1 teachers’ implementation of written feedback in EFL classrooms. Assessing Writing, 65, Article 100935.
Pub Date: 2025-07-01 | Epub Date: 2025-03-26 | DOI: 10.1016/j.asw.2025.100936
Xian Liao , Pengfei Zhao , Zicheng Li
An accurate assessment of writing relies on a thorough understanding of its underlying processes and related factors. While integrated writing (IW) is crucial for students’ academic success and future career development, the factors influencing performance in such complex tasks remain under scientific investigation. In particular, although the core role of source use in completing IW tasks is widely acknowledged, we still need to explore factors that could facilitate writers’ effective use of sources. While recent studies have highlighted the critical role of executive functions (EFs)—such as working memory, inhibition, and cognitive flexibility—during writing activities, the exact influence of these foundational cognitive skills on source use and IW performance remains unclear. To this end, this study recruited 233 secondary students in Hong Kong to complete a set of standardized EF tasks and a Chinese reading-to-write IW task. The students’ written products were analyzed regarding the use of content ideas and linguistic transformation based on source materials. We found that visual-spatial working memory had a significant direct effect on IW performance. Two critical aspects of source use—ideas from sources and near copy—mediated the relationship between EF skills and IW performance. These findings contribute to our understanding of the role of EF skills in complex IW tasks. We highlight the implications of our results for the assessment, teaching, and learning of integrated writing.
The relationship between executive functions, source use, and integrated writing performance. Assessing Writing, 65, Article 100936.
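The mediation finding above (aspects of source use mediating the link between EF skills and IW performance) is conventionally estimated with a product-of-coefficients approach: regress the mediator on the predictor (path a), regress the outcome on both (path b plus the direct effect), and multiply a × b for the indirect effect. A dependency-free sketch with invented, exactly linear data (not the study's data, and with one mediator rather than the paper's full model):

```python
def ols(X, y):
    """Ordinary least squares via normal equations (no dependencies).

    X: list of rows, each starting with 1 for the intercept; y: targets.
    Solves (X'X) beta = X'y by Gaussian elimination with partial pivoting.
    """
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yv for r, yv in zip(X, y)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in range(k - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

# Invented data: ef (X) -> source use (M) -> IW score (Y), exact relations
ef = [0.0, 1.0, 2.0, 3.0]
src = [2 * x + e for x, e in zip(ef, [1, -1, -1, 1])]   # path a = 2
iw = [3 * m + x for m, x in zip(src, ef)]               # path b = 3, direct = 1

a = ols([[1, x] for x in ef], src)[1]
b_, direct = ols([[1, m, x] for m, x in zip(src, ef)], iw)[1:]
indirect = a * b_
print(round(a, 2), round(b_, 2), round(direct, 2), round(indirect, 2))
```

With real data, the indirect effect would be tested with bootstrap confidence intervals (or a structural equation model, as mediation studies of this kind usually do) rather than read off point estimates.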
Pub Date: 2025-07-01 | Epub Date: 2025-05-28 | DOI: 10.1016/j.asw.2025.100952
Fengkai Liu , Xiaofei Lu , Tan Jin
Research has shown that continuation writing assessment tasks place considerable demands on test-takers’ vocabulary, especially in constructing coherent story plots and employing vivid language. Traditionally, learners have limited access to model continuations, and teacher feedback on vocabulary usage often falls short of guiding learners in selecting and using diverse words appropriate for specific contexts. To address these challenges, this paper introduces a ChatGPT-assisted platform designed to facilitate the learning of the meanings, functions, and usage of frequent core words in continuation writing assessment tasks. We further explore the pedagogical possibilities of this platform and discuss its limitations and implications for future research.
Using ChatGPT to facilitate vocabulary learning in continuation writing assessment tasks. Assessing Writing, 65, Article 100952.
Pub Date: 2025-07-01 | Epub Date: 2025-05-08 | DOI: 10.1016/j.asw.2025.100947
Jia He , Jun Xia , Chun-mei Zhang , Jian-nan Liu
Peer review training is reported as an important scaffold for students’ peer review practices in second language (L2) writing research, yet its effect on L2 learners’ cognitive engagement with peer feedback requires further exploration. This study examined the impact of ongoing peer review training on 45 Chinese EFL undergraduates’ cognitive engagement in peer review during a 12-week English public speaking and writing course. Drawing on multiple data sources, including reflective journals, written peer feedback, and semi-structured retrospective interviews, this mixed-methods study found that students developed an enhanced awareness of deeper-level, content-related writing problems and peer feedback after receiving peer review training. However, understanding peer feedback occurred later than noticing writing problems. The employment of cognitive and meta-cognitive strategies varied throughout the training sessions, with an initial emphasis on analyzing, evaluating, monitoring, and reflecting, and a later collective incorporation of comparing and integrating. The quality of post-training written peer feedback also triangulated the enhancement of cognitive engagement. These findings extend previous research showing that students engage more deeply with feedback over time by unveiling asynchronous awareness and evolving cognitive and meta-cognitive operations, and they indicate the critical role of language teachers in promoting students’ cognitive engagement in peer review in the EFL context.
Promoting cognitive engagement with peer feedback through peer review training: The case of Chinese tertiary-level EFL learners. Assessing Writing, 65, Article 100947.
Pub Date: 2025-07-01 | Epub Date: 2025-06-03 | DOI: 10.1016/j.asw.2025.100954
Scott A. Crossley , Perpetual Baffour , L. Burleigh , Jules King
This paper introduces ASAP 2.0, a dataset of ∼25,000 source-based argumentative essays from U.S. secondary students. The corpus addresses the shortcomings of the original ASAP corpus by including demographic data, consistent scoring rubrics, and source texts. ASAP 2.0 aims to support the development of unbiased, sophisticated Automatic Essay Scoring (AES) systems that can foster improved educational practices by providing summative feedback to students. The corpus is designed for broad accessibility with the hope of facilitating research into writing quality and AES system biases.
A large-scale corpus for assessing source-based writing quality: ASAP 2.0. Assessing Writing, 65, Article 100954.
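AES systems trained on ASAP-style corpora are conventionally evaluated with quadratic weighted kappa (QWK), the agreement metric used in the original ASAP competition: 1.0 indicates perfect agreement with human raters, 0 indicates chance-level agreement, and larger score discrepancies are penalized quadratically. A minimal, dependency-free implementation:

```python
def quadratic_weighted_kappa(rater_a, rater_b, min_r, max_r):
    """Quadratic weighted kappa between two integer score vectors.

    min_r and max_r bound the rubric's score range. Disagreements
    are weighted by the squared distance between the two scores.
    """
    n = max_r - min_r + 1
    O = [[0] * n for _ in range(n)]            # observed score matrix
    for a, b in zip(rater_a, rater_b):
        O[a - min_r][b - min_r] += 1
    hist_a = [sum(row) for row in O]
    hist_b = [sum(O[i][j] for i in range(n)) for j in range(n)]
    total = len(rater_a)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = (i - j) ** 2 / (n - 1) ** 2    # quadratic disagreement weight
            expected = hist_a[i] * hist_b[j] / total  # chance agreement
            num += w * O[i][j]
            den += w * expected
    return 1.0 - num / den

# Invented scores on a 1-4 rubric: model matches the human rater exactly
human = [1, 2, 3, 4, 3, 2]
model = [1, 2, 3, 4, 3, 2]
print(quadratic_weighted_kappa(human, model, 1, 4))  # → 1.0
```

Negative values are possible when agreement is worse than chance, e.g. when the two raters' scores are systematically inverted.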
Pub Date: 2025-04-01 | Epub Date: 2025-01-29 | DOI: 10.1016/j.asw.2025.100915
Lujie Zheng , Sheena Kaur , Azlin Zaiti Zainal
Phraseological knowledge has gained popularity as a critical predictor of writing assessment in second and foreign language (L2/FL) learner corpus research. However, past phraseological studies on learners with different levels of language competency have overlooked multidimensional collocational indices and the potential influence of cognitive characteristics. This study, employing multiple collocational measures, tracks the growth of adjective-noun combinations in the English argumentative writings of a longitudinal cohort of 148 Chinese EFL learners over four months and explores the effects of language proficiency and working memory (WM) on their phraseological growth. Our findings revealed a general upward pattern in learners’ overall development, despite some slight fluctuations. Notably, the mixed-effects models indicated that time alone had a negative impact on learners’ use of high-frequency, diverse, and strongly associated combinations. However, language proficiency and WM modulated this process, as learners with higher proficiency or greater WM demonstrated temporal improvement across most indices. The interplay among time, language proficiency, and WM presented a more complex picture in which highly proficient learners showed a sloping trend on all collocational variables as WM capacity increased, suggesting a potential impact of cognitive overload. These findings offer valuable insights for teaching and identify prospective directions for future research into phraseological knowledge development.
The influence of working memory and proficiency on phraseological growth: A longitudinal study of adjective-noun combinations in Chinese EFL learners’ argumentative writing. Assessing Writing, 64, Article 100915.
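Collocational strength indices of the kind tracked in this study commonly include mutual information (MI) and the t-score, both derived from a pair's observed co-occurrence frequency versus the frequency expected if the two words co-occurred by chance. A sketch with hypothetical counts (the pair, word, and corpus frequencies below are invented for illustration):

```python
import math

def mi_score(f_pair, f_w1, f_w2, corpus_size):
    """Pointwise mutual information for a word pair:
    log2(observed pair frequency / expected pair frequency)."""
    expected = f_w1 * f_w2 / corpus_size
    return math.log2(f_pair / expected)

def t_score(f_pair, f_w1, f_w2, corpus_size):
    """t-score for a word pair: (observed - expected) / sqrt(observed).
    Favors frequent pairs, unlike MI, which favors exclusive ones."""
    expected = f_w1 * f_w2 / corpus_size
    return (f_pair - expected) / math.sqrt(f_pair)

# Hypothetical counts for an adjective-noun pair in a 1,000,000-token corpus:
# the pair occurs 40 times; the adjective 800 times; the noun 500 times.
print(round(mi_score(40, 800, 500, 1_000_000), 2))  # → 6.64
print(round(t_score(40, 800, 500, 1_000_000), 2))   # → 6.26
```

Because the two measures reward different things, multidimensional collocational profiles like the one in this study typically report several such indices side by side rather than relying on one.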