EvaluMate: Using AI to support students’ feedback provision in peer assessment for writing
Kai Guo
Pub Date: 2024-07-01 | Epub Date: 2024-05-31 | DOI: 10.1016/j.asw.2024.100864
Peer feedback plays an important role in promoting learning in the writing classroom. However, providing high-quality feedback can be demanding for student reviewers. To address this challenge, this article proposes an AI-enhanced approach to peer feedback provision. I introduce EvaluMate, a newly developed online peer review system that leverages ChatGPT, a large language model (LLM), to scaffold student reviewers’ feedback generation. I discuss the design and functionality of EvaluMate, highlighting its affordances in supporting student reviewers’ provision of comments on peers’ essays. I also address the system’s limitations and propose potential solutions. Furthermore, I recommend future research on students’ engagement with this learning approach and its impact on learning outcomes. By presenting EvaluMate, I aim to inspire researchers and practitioners to explore the potential of AI technology in the teaching, learning, and assessment of writing.
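The article does not publish EvaluMate’s implementation details, but the core idea — using an LLM to scaffold, rather than write, a reviewer’s comments — can be illustrated with a minimal sketch. The model name, prompt wording, and rubric criterion below are assumptions for illustration, not EvaluMate’s actual design.

```python
# Minimal sketch (not EvaluMate's actual implementation): prompting an LLM to
# scaffold a student reviewer's comment on a peer's essay. Prompt wording,
# model name, and the rubric criterion are illustrative assumptions.
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def scaffold_feedback(essay_text: str, criterion: str) -> str:
    """Ask the model for guiding questions a reviewer could use, not a finished review."""
    prompt = (
        "You are helping a student reviewer comment on a peer's essay.\n"
        f"Focus on this criterion: {criterion}.\n"
        "List three guiding questions the reviewer should consider, "
        "each tied to a specific passage in the essay.\n\n"
        f"Essay:\n{essay_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: scaffold comments on argument quality for a short draft.
print(scaffold_feedback("Social media harms teenagers because ...", "argument quality"))
```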
{"title":"EvaluMate: Using AI to support students’ feedback provision in peer assessment for writing","authors":"Kai Guo","doi":"10.1016/j.asw.2024.100864","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100864","url":null,"abstract":"<div><p>Peer feedback plays an important role in promoting learning in the writing classroom. However, providing high-quality feedback can be demanding for student reviewers. To address this challenge, this article proposes an AI-enhanced approach to peer feedback provision. I introduce EvaluMate, a newly developed online peer review system that leverages ChatGPT, a large language model (LLM), to scaffold student reviewers’ feedback generation. I discuss the design and functionality of EvaluMate, highlighting its affordances in supporting student reviewers’ provision of comments on peers’ essays. I also address the system’s limitations and propose potential solutions. Furthermore, I recommend future research on students’ engagement with this learning approach and its impact on learning outcomes. By presenting EvaluMate, I aim to inspire researchers and practitioners to explore the potential of AI technology in the teaching, learning, and assessment of writing.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100864"},"PeriodicalIF":3.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141243092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A teacher’s inquiry into diagnostic assessment in an EAP writing course
Rabail Qayyum
Pub Date: 2024-07-01 | Epub Date: 2024-05-30 | DOI: 10.1016/j.asw.2024.100848
Research into diagnostic assessment of writing has largely ignored how diagnostic feedback information leads to differentiated instruction and learning. This case study presents a teacher’s account of validating an in-house diagnostic assessment procedure in an English for Academic Purposes writing course with a view to refining it. I developed a validity argument and gathered and interpreted related evidence, focusing on one student’s performance in, and perception of, the assessment. The analysis revealed that the absence of proper feedback mechanisms limited the use of the test to some extent, weakened its impact, and reduced its potential for learning. I propose a modification to the assessment procedure, illustrated with a sample student feedback report.
{"title":"A teacher’s inquiry into diagnostic assessment in an EAP writing course","authors":"Rabail Qayyum","doi":"10.1016/j.asw.2024.100848","DOIUrl":"10.1016/j.asw.2024.100848","url":null,"abstract":"<div><p>Research into diagnostic assessment of writing has largely ignored how diagnostic feedback information leads to differentiated instruction and learning. This case study research presents a teacher’s account of validating an in-house diagnostic assessment procedure in an English for Academic Purposes writing course with a view to refining it. I developed a validity argument and gathered and interpreted related evidence, focusing on one student’s performance in and perception of the assessment. The analysis revealed that to an extent the absence of proper feedback mechanisms limited the use of the test, somewhat weakened its impact, and reduced the potential for learning. I propose a modification to the assessment procedure involving a sample student feedback report.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100848"},"PeriodicalIF":3.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141188259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond accuracy gains: Investigating the impact of individual and collaborative feedback processing on L2 writing development
Carrie Xin Peng
Pub Date: 2024-07-01 | Epub Date: 2024-07-25 | DOI: 10.1016/j.asw.2024.100876
Despite the burgeoning research on exploring learner engagement with feedback, how second language (L2) learners’ engagement with feedback in different processing conditions influences their subsequent writing development is under-explored. This study examines the effects of individual and collaborative processing (languaging) of teacher feedback on Chinese lower-secondary school EFL learners’ writing development. Eighty-one students aged 13–14 with A1-A2 levels of English proficiency (according to the Common European Framework of Reference) from two classes and two experienced English teachers participated in the study. Students were provided with comprehensive teacher feedback and were asked to process feedback provided on three writing tasks through either individual written or collaborative oral languaging over six weeks. Pre-, post-, and delayed post-tests were administered. Students’ writing development was analysed using complexity, accuracy, and fluency measures, as well as content and organisation writing scores. Findings showed that the two conditions did not influence students’ writing complexity and fluency differently, while only the collaborative oral languaging condition contributed to students’ sustainable accuracy gains. Results based on the analytic writing scores suggested that students in the two conditions significantly improved content and organisation scores over time. Pedagogical and research implications regarding implementing the two feedback processing conditions are discussed.
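As a rough illustration of the kind of accuracy and fluency measures used in complexity–accuracy–fluency (CAF) analyses, the sketch below summarises clauses that have already been error-coded by human raters. The input format and the specific ratios are assumptions for illustration, not the study’s instruments.

```python
# Toy sketch of simple CAF-style summaries computed from human-annotated clauses.
# The annotation scheme (clause text plus a has_error flag) is assumed.
def caf_summary(clauses: list[dict]) -> dict:
    """clauses: [{"text": str, "has_error": bool}, ...]"""
    n_clauses = len(clauses)
    words = sum(len(c["text"].split()) for c in clauses)
    error_free = sum(1 for c in clauses if not c["has_error"])
    return {
        "fluency_total_words": words,
        "accuracy_error_free_clause_ratio": error_free / n_clauses if n_clauses else 0.0,
        "complexity_mean_clause_length": words / n_clauses if n_clauses else 0.0,
    }

sample = [
    {"text": "I like playing basketball", "has_error": False},
    {"text": "because it make me happy", "has_error": True},
]
print(caf_summary(sample))
```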
{"title":"Beyond accuracy gains: Investigating the impact of individual and collaborative feedback processing on L2 writing development","authors":"Carrie Xin Peng","doi":"10.1016/j.asw.2024.100876","DOIUrl":"10.1016/j.asw.2024.100876","url":null,"abstract":"<div><p>Despite the burgeoning research on exploring learner engagement with feedback, how second language (L2) learners’ engagement with feedback in different processing conditions influences their subsequent writing development is under-explored. This study examines the effects of individual and collaborative processing (languaging) of teacher feedback on Chinese lower-secondary school EFL learners’ writing development. Eighty-one students aged 13–14 with A1-A2 levels of English proficiency (according to the Common European Framework of Reference) from two classes and two experienced English teachers participated in the study. Students were provided with comprehensive teacher feedback and were asked to process feedback provided on three writing tasks through either individual written or collaborative oral languaging over six weeks. Pre-, post-, and delayed post-tests were administered. Students’ writing development was analysed using complexity, accuracy, and fluency measures, as well as content and organisation writing scores. Findings showed that the two conditions did not influence students’ writing complexity and fluency differently, while only the collaborative oral languaging condition contributed to students’ sustainable accuracy gains. Results based on the analytic writing scores suggested that students in the two conditions significantly improved content and organisation scores over time. Pedagogical and research implications regarding implementing the two feedback processing conditions are discussed.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100876"},"PeriodicalIF":4.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1075293524000692/pdfft?md5=f166583fc6f801f5b2a71d5e058cf1ce&pid=1-s2.0-S1075293524000692-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141844063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining teacher’s evaluative language in written, audio and screencast feedback on EFL learners’ writing from the appraisal framework: A linguistic perspective
Murad Abdu Saeed, Atef AbuSa'aleek, Mohammed Abdullah Alharbi
Pub Date: 2024-07-01 | Epub Date: 2024-07-02 | DOI: 10.1016/j.asw.2024.100871
Technology facilitates teacher corrective feedback on students' writing, but how written, audio and screencast modes affect teachers' evaluative language in electronic (e-)feedback has rarely been examined from a linguistic perspective. Using the engagement resources of the appraisal framework within Systemic Functional Linguistics, this study examined the effect of written, audio and screencast modes on an instructor's evaluative language in his e-feedback on writing, and on the text revisions of 15 pairs of Saudi EFL learners. The linguistic analysis of the e-feedback revealed that the instructor's engagement resources differed across the three e-feedback modes. Specifically, the screencast and audio e-feedback modes were dominated by expanding resources (resources that expand the space for dialogue), as opposed to the prevalence of contracting resources (resources that limit or shut down the space for dialogue) in the written feedback mode. Moreover, the audio and screencast feedback modes contained more statements and suggestions, whereas the written feedback mode was dominated by commands/orders and suggested corrections. The content analysis revealed that the screencast e-feedback mode addressed a greater number of global issues in writing, whereas the audio and written e-feedback modes addressed a greater number of local issues. Despite the higher overall rate of successful text revisions resulting from the screencast and audio e-feedback modes, no significant differences were found except in relation to students' global text revisions. The study offers useful pedagogical implications for instructors in responding effectively to students' writing.
{"title":"Examining teacher’s evaluative language in written, audio and screencast feedback on EFL learners’ writing from the appraisal framework: A linguistic perspective","authors":"Murad Abdu Saeed , Atef AbuSa'aleek , Mohammed Abdullah Alharbi","doi":"10.1016/j.asw.2024.100871","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100871","url":null,"abstract":"<div><p>Technology facilitates teacher corrective feedback on students' writing, but there is a need to examine how written, audio and screencast modes affect teacher's evaluative language of electronic (e-) feedback from linguistic approaches. By using the engagement resources of the appraisal framework within Systemic Functional Linguistics, this study examined the effect of written, audio and screencast modes on the instructor's evaluative language in his e-feedback on writing and the text revisions of 15 pairs of Saudi EFL learners. The linguistic analysis of the e-feedback revealed that the instructor's engagement resources differed across the three e-feedback modes. Specifically, the screencast and audio e-feedback modes were dominated by expanding resources (resources expanding the space for dialogue) as opposed to the prevalence of contracting resources (resources limiting/shutting down the space for dialogue) in the written feedback mode. Moreover, the audio and screencast feedback modes contained more statements and suggestions whereas the written feedback mode was dominated by commands/orders and suggested corrections. The content analysis revealed that the screencast e-feedback mode addressed a higher number of global issues in writing; however, the audio and written e-feedback modes addressed a higher number of local issues in writing. Despite the higher overall rate of successful text revisions resulting from the screencast and audio e-feedback modes, no significant differences were found except in relation to students' global text revisions. The study offers useful pedagogical implications for instructors in effectively responding to students' writing.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100871"},"PeriodicalIF":4.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141543227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling relationships among large-grained, fine-grained absolute syntactic complexity and assessed L2 writing quality: An SEM approach
Yuxin Peng, Yafu Zheng, Jie Sun, Yue Jiang, Jiexin Lin, Haomin Zhang
Pub Date: 2024-07-01 | Epub Date: 2024-07-25 | DOI: 10.1016/j.asw.2024.100875
The current study investigated the relationships among large-grained and fine-grained aspects of absolute syntactic complexity (SC) and expert-assessed writing quality in 446 argumentative writing samples of college-level Chinese EFL learners. Computational indices tapping into large-grained and fine-grained aspects of absolute SC were computed with TAASSC and L2SCA. Drawing upon rigorous SEM analyses, this paper demonstrated the utility of computational indices that tap into absolute SC. Overall, the measurements of absolute SC accounted for 42 % of the variance in human-judged overall writing scores. The results revealed that (1) noun phrase (NP) complexity was the underlying cause that determined trained raters’ judgement of argumentative writing quality; (2) among traditional large-grained indices, mean length of clause (MLC), complex nominals per clause (CN/C), and complex nominals per T-unit (CN/T) were dependable metrics for representing SC and predicting writing quality; (3) among fine-grained indices, prepositional phrases and relative clauses as noun modifiers were prominent in representing NP complexity; (4) relative clause and adjectival modifiers had effects that were unique and complementary to large-grained NP complexity in explaining human judgement; (5) the use of prepositions in NPs was the most prominent contributor to the increase in large-grained NP complexity among the noun phrase modifiers in this specific corpus. Situated in previous research, the results provide an opportunity to evaluate L2 writing within the theoretical framework of absolute syntactic complexity.
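For readers unfamiliar with such indices, the sketch below approximates fine-grained noun-phrase complexity counts with spaCy dependency labels. It is in the spirit of, but not equivalent to, the TAASSC/L2SCA indices used in the study; the per-noun ratios and label heuristics are assumptions.

```python
# Rough sketch of fine-grained noun-phrase (NP) complexity counts with spaCy.
# The dependency-label heuristics are simplifications, not TAASSC/L2SCA itself.
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def np_modifier_counts(text: str) -> dict:
    doc = nlp(text)
    nouns = [t for t in doc if t.pos_ in ("NOUN", "PROPN")]
    if not nouns:
        return {"prep_phrases_per_noun": 0.0,
                "relative_clauses_per_noun": 0.0,
                "adjectival_modifiers_per_noun": 0.0}
    # Count modifiers whose head is a noun: prepositional phrases, relative
    # clauses, and adjectival modifiers attached to nouns.
    prep = sum(1 for t in doc if t.dep_ == "prep" and t.head.pos_ in ("NOUN", "PROPN"))
    relcl = sum(1 for t in doc if t.dep_ == "relcl" and t.head.pos_ in ("NOUN", "PROPN"))
    amod = sum(1 for t in doc if t.dep_ == "amod" and t.head.pos_ in ("NOUN", "PROPN"))
    return {
        "prep_phrases_per_noun": prep / len(nouns),
        "relative_clauses_per_noun": relcl / len(nouns),
        "adjectival_modifiers_per_noun": amod / len(nouns),
    }

print(np_modifier_counts("The students who wrote the essays used phrases with many long modifiers."))
```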
{"title":"Modeling relationships among large-grained, fine-grained absolute syntactic complexity and assessed L2 writing quality: An SEM approach","authors":"Yuxin Peng , Yafu Zheng , Jie Sun , Yue Jiang , Jiexin Lin , Haomin Zhang","doi":"10.1016/j.asw.2024.100875","DOIUrl":"10.1016/j.asw.2024.100875","url":null,"abstract":"<div><p>The current study investigated the relationships among large-grained and fine-grained aspects of absolute syntactic complexity (SC) and expert-assessed writing quality of 446 argumentative writing samples of college-level Chinese EFL learners. Computational indices tapping into large-grained and fine-grained aspects of absolute SC were computed by TAASSC and L2SCA. Drawing upon rigorous SEM analyses, this paper demonstrated the utility of computational indices that tap into absolute SC. Overall, the measurements of absolute SC accounted for 42 % of the variance in human-judged overall writing scores. The results revealed that (1) noun phrase (NP) complexity was the underlying cause that determined trained raters’ judgement on argumentative writing quality; (2) among traditional large-grained indices, MLC, CN/C, and CN/T, were dependable metrics in representing SC and predicting writing quality; (3) among fine-grained indices, prepositional phrases and relative clauses as noun modifiers were prominent in representing NP complexity; (4) relative clause and adjectival modifiers had unique and complementary effects to large-grained NP complexity in affording explanations for human judgement; (5) the use of prepositions in NP was the most prominent contributor to the increase of large-grained NP complexity among the noun phrase modifiers in this specific corpus. Situated in previous research, the results provide an opportunity to evaluate L2 writing within the theoretical framework of absolute syntactic complexity.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100875"},"PeriodicalIF":4.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141838360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining the direct and indirect impacts of verbatim source use on linguistic complexity in integrated argumentative writing assessment
Huiying Cai, Xun Yan
Pub Date: 2024-07-01 | Epub Date: 2024-07-02 | DOI: 10.1016/j.asw.2024.100868
Verbatim source use (VSU) in integrated argumentative writing tasks may enhance linguistic complexity of writing performance. This assistance might present an unequal advantage for test-takers across levels of writing proficiency, engendering validity and fairness concerns. While previous research has mostly examined the relationships between source use characteristics and proficiency levels, the relationship between VSU and linguistic complexity remains underexplored. To further unpack these relationships, this study examined both the direct impact of VSU on linguistic complexity of writing performances and its indirect impact through interaction with writing proficiency. Using natural language processing tools and techniques, we examined 34 linguistic complexity features and three VSU features of 3250 argumentative writing performances on a university-level English Placement Test (EPT). We performed exploratory factor analysis to identify linguistic complexity dimensions and applied mixed-effect models to examine how VSU features and proficiency level impacted these dimensions. Post-hoc analyses suggested weak direct impacts of different VSU features on linguistic complexity, which might reflect different essay writing strategies. However, no meaningful indirect impact was found. The findings help unravel the impact of VSU on argumentative writing and provide empirical evidence for validity arguments for integrated writing assessments.
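One simple way to operationalise verbatim source use is the proportion of an essay’s word n-grams that also occur verbatim in the source passage. The study’s own VSU features are not specified here, so the n-gram overlap below is only an illustrative assumption.

```python
# Illustrative sketch of quantifying verbatim source use (VSU) as the share of
# an essay's word n-grams that appear verbatim in the source text. This overlap
# measure is an assumption for illustration, not the study's actual features.
import re

def ngrams(text: str, n: int) -> set:
    tokens = re.findall(r"[a-z']+", text.lower())
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def verbatim_overlap(essay: str, source: str, n: int = 5) -> float:
    """Share of the essay's n-grams copied verbatim from the source (0-1)."""
    essay_grams = ngrams(essay, n)
    if not essay_grams:
        return 0.0
    return len(essay_grams & ngrams(source, n)) / len(essay_grams)

source = "Integrated writing tasks require test-takers to read a passage and respond."
essay = "Integrated writing tasks require test-takers to read a passage, and I agree."
print(round(verbatim_overlap(essay, source, n=5), 2))
```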
{"title":"Examining the direct and indirect impacts of verbatim source use on linguistic complexity in integrated argumentative writing assessment","authors":"Huiying Cai , Xun Yan","doi":"10.1016/j.asw.2024.100868","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100868","url":null,"abstract":"<div><p>Verbatim source use (VSU) in integrated argumentative writing tasks may enhance linguistic complexity of writing performance. This assistance might present an unequal advantage for test-takers across levels of writing proficiency, engendering validity and fairness concerns. While previous research has mostly examined the relationships between source use characteristics and proficiency levels, the relationship between VSU and linguistic complexity remains underexplored. To further unpack these relationships, this study examined both the direct impact of VSU on linguistic complexity of writing performances and its indirect impact through interaction with writing proficiency. Using natural language processing tools and techniques, we examined 34 linguistic complexity features and three VSU features of 3250 argumentative writing performances on a university-level English Placement Test (EPT). We performed exploratory factor analysis to identify linguistic complexity dimensions and applied mixed-effect models to examine how VSU features and proficiency level impacted these dimensions. Post-hoc analyses suggested weak direct impacts of different VSU features on linguistic complexity, which might reflect different essay writing strategies. However, no meaningful indirect impact was found. The findings help unravel the impact of VSU on argumentative writing and provide empirical evidence for validity arguments for integrated writing assessments.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100868"},"PeriodicalIF":4.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1075293524000618/pdfft?md5=faa5261e073280115613145d7ef0bb9e&pid=1-s2.0-S1075293524000618-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141543225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing Chinese L2 writing performance in paper-based and computer-based modes: Perspectives from the writing product and process
Xiaozhu Wang, Jimin Wang
Pub Date: 2024-07-01 | Epub Date: 2024-05-31 | DOI: 10.1016/j.asw.2024.100849
As writing is a complex language-producing process dependent on the writing environment and medium, the comparability of computer-based (CB) and paper-based (PB) writing assessments has been studied extensively since the emergence of computer-based language writing assessment. This study investigated differences in the writing product and process between CB and PB modes of writing assessment in Chinese as a second language, a language whose character-based writing system is considered challenging for learners. The many-facet Rasch model (MFRM) was adopted to reveal differences in text quality. Keystroke and handwriting trace data were used to gain insights into the writing process. The results showed that Chinese L2 learners generated higher-quality texts with fewer character mistakes in the CB mode. They also revised more, and paused for shorter durations and less frequently between lower-level linguistic units, in the CB mode. The quality of CB texts was associated with revision behavior, whereas pause duration was a stronger predictor of PB text quality. The findings suggest that the act of handwriting Chinese characters makes the construct of the PB writing assessment distinct from that of the CB writing assessment in L2 Chinese. The choice of assessment mode should therefore consider the target language use and the test takers’ characteristics.
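The pause measures central to such keystroke analyses can be illustrated with a small sketch. The log format (timestamped key events) and the 2-second pause threshold below are assumptions for illustration, not the authors’ pipeline.

```python
# Minimal sketch of extracting pause measures from a timestamped keystroke log.
# Timestamps are in milliseconds; a gap of 2000 ms or more counts as a pause
# (both the log format and the threshold are assumed).
def pause_measures(events: list[tuple[int, str]], threshold_ms: int = 2000) -> dict:
    """Return pause count, mean pause length, and pauses per keystroke."""
    gaps = [b[0] - a[0] for a, b in zip(events, events[1:])]
    pauses = [g for g in gaps if g >= threshold_ms]
    return {
        "n_keystrokes": len(events),
        "n_pauses": len(pauses),
        "mean_pause_ms": sum(pauses) / len(pauses) if pauses else 0.0,
        "pauses_per_keystroke": len(pauses) / len(events) if events else 0.0,
    }

log = [(0, "我"), (450, "们"), (3100, "在"), (3600, "写"), (9000, "。")]
print(pause_measures(log))
```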
{"title":"Comparing Chinese L2 writing performance in paper-based and computer-based modes: Perspectives from the writing product and process","authors":"Xiaozhu Wang, Jimin Wang","doi":"10.1016/j.asw.2024.100849","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100849","url":null,"abstract":"<div><p>As writing is a complex language-producing process dependent on the writing environment and medium, the comparability of computer-based (CB) and paper-based (PB) writing assessments has been studied extensively since the emergence of computer-based language writing assessment. This study investigated the differences in the writing product and process between CB and PB modes of writing assessment in Chinese as a second language, of which the character writing system is considered challenging for learners. The many-facet Rasch model (MFRM) was adopted to reveal the text quality differences. Keystrokes and handwriting trace data were utilized to unveil insights into the writing process. The results showed that Chinese L2 learners generated higher-quality texts with fewer character mistakes in the CB mode. They revised much more, paused shorter and less frequently between lower-level linguistic units in the CB mode. The quality of CB text is associated with revision behavior, whereas pause duration serves as a stronger predictor of PB text quality. The findings suggest that the act of handwriting Chinese characters makes the construct of PB distinct from the CB writing assessment in L2 Chinese. Thus, the setting of the assessment mode should consider the target language use and the test taker’s characteristics.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100849"},"PeriodicalIF":3.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141243091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Does “more complexity” equal “better writing”? Investigating the relationship between form-based complexity and meaning-based complexity in high school EFL learners’ argumentative writing
Sachiko Yasuda
Pub Date: 2024-07-01 | Epub Date: 2024-06-13 | DOI: 10.1016/j.asw.2024.100867
This study examines how form-based complexity and meaning-based complexity in argumentative essays written by high school learners of English as a foreign language (EFL) relate to writing quality. The data comprise argumentative essays written by 102 Japanese high school learners at different proficiency levels. The students’ proficiency levels were determined based on the evaluation of their argumentative essays by human raters using the GTEC rubric. The students’ essays were analyzed along multiple dimensions, focusing on both form-based complexity (lexical complexity, large-grained syntactic complexity, and fine-grained syntactic complexity features) and meaning-based complexity (argument quality). The results of the multidimensional analysis revealed that the most influential factor in determining overall essay scores was not form-based complexity but meaning-based complexity achieved through argument quality. Moreover, the results indicated that meaning-based complexity was strongly correlated with the use of complex nominals rather than with clausal complexity. These insights have significant implications for both the teaching and assessment of argumentative essays among high school EFL learners, underscoring the importance of understanding which aspects of writing to prioritize and how best to assess student writing.
{"title":"Does “more complexity” equal “better writing”? Investigating the relationship between form-based complexity and meaning-based complexity in high school EFL learners’ argumentative writing","authors":"Sachiko Yasuda","doi":"10.1016/j.asw.2024.100867","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100867","url":null,"abstract":"<div><p>The study examines the relationship between form-based complexity and meaning-based complexity in argumentative essays written by high school students learning English as a foreign language (EFL) in relation to writing quality. The data comprise argumentative essays written by 102 Japanese high school learners at different proficiency levels. The students’ proficiency levels were determined based on the evaluation of their argumentative essays by human raters using the GTEC rubric. The students’ essays were analyzed from multiple dimensions, focusing on both form-based complexity (lexical complexity, large-grained syntactic complexity, and fine-grained syntactic complexity features) and meaning-based complexity (argument quality). The results of the multidimensional analysis revealed that the most influential factor in determining overall essay scores was not form-based complexity but meaning-based complexity achieved through argument quality. Moreover, the results indicated that meaning-based complexity was strongly correlated with the use of complex nominals rather than clausal complexity. These insights have significant implications for both the teaching and assessment of argumentative essays among high school EFL learners, underscoring the importance of understanding what aspects of writing to prioritize and how best to assess student writing.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100867"},"PeriodicalIF":3.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141313794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thirty years of writing assessment: A bibliometric analysis of research trends and future directions
Jihua Dong, Yanan Zhao, Louisa Buckingham
Pub Date: 2024-07-01 | Epub Date: 2024-06-07 | DOI: 10.1016/j.asw.2024.100862
This study employs a bibliometric analysis to identify research trends in the field of writing assessment over the last 30 years (1993–2022). Drawing on a dataset of 1,712 articles and 52,092 unique references, keyword co-occurrence analyses were used to identify prominent research topics, co-citation analyses were conducted to identify influential publications and journals, and a structural variation analysis was employed to identify transformative research in recent years. The results reveal the growing popularity of the writing assessment field and the increasing diversity of its research topics. Research trends have become more closely associated with technology and with cognitive and metacognitive processes. The influential publications indicate a shift in research interest towards cross-disciplinary work. The journals identified as key venues for writing assessment research also changed across the three decades. The latest transformative research points to possible future directions, including the integration of computational methods in writing assessment and investigations into relationships between writing quality and various factors. This study contributes to our understanding of the development and future directions of writing assessment research and has implications for researchers and practitioners.
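The keyword co-occurrence counting that underlies this kind of bibliometric analysis can be sketched in a few lines. The keyword lists below are invented for illustration; the study itself relies on dedicated bibliometric tooling.

```python
# Bare-bones sketch of keyword co-occurrence counting for bibliometric mapping.
# Each article contributes one count for every unordered pair of its keywords.
from itertools import combinations
from collections import Counter

article_keywords = [  # invented example data
    ["writing assessment", "rater reliability", "rubrics"],
    ["writing assessment", "automated scoring", "syntactic complexity"],
    ["automated scoring", "syntactic complexity", "rubrics"],
]

cooccurrence = Counter()
for keywords in article_keywords:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

# The most frequent pairs would form the densest edges in a co-occurrence network.
for pair, count in cooccurrence.most_common(3):
    print(pair, count)
```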
{"title":"Thirty years of writing assessment: A bibliometric analysis of research trends and future directions","authors":"Jihua Dong , Yanan Zhao , Louisa Buckingham","doi":"10.1016/j.asw.2024.100862","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100862","url":null,"abstract":"<div><p>This study employs a bibliometric analysis to identify the research trends in the field of writing assessment over the last 30 years (1993–2022). Employing a dataset of 1,712 articles and 52,092 unique references, keyword co-occurrence analyses were used to identify prominent research topics, co-citation analyses were conducted to identify influential publications and journals, and a structural variation analysis was employed to identify transformative research in recent years. The results revealed the growing popularity of the writing assessment field, and the increasing diversity of research topics in the field. The research trends have become more associated with technology and cognitive and metacognitive processes. The influential publications indicate changes in research interest towards cross-disciplinary publications. The journals identified as key venues for writing assessment research also changed across the three decades. The latest transformative research points out possible future directions, including the integration of computational methods in writing assessment, and investigations into relationships between writing quality and various factors. This study contributes to our understanding of the development and future directions of writing assessment research, and has implications for researchers and practitioners.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100862"},"PeriodicalIF":3.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141286045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discourse competence in Hong Kong secondary students’ disciplinary research writing
Jack Pun, Sheng Tan, Xiang Li
Pub Date: 2024-07-01 | Epub Date: 2024-07-22 | DOI: 10.1016/j.asw.2024.100872
The study examined the frequency and accuracy of four discourse competence (DC) parameters (i.e., establishment of a smooth old-to-new information flow, stance display, engagement with readers, and mastery of knowledge of discourse structure) in Hong Kong English-as-a-second-language (ESL) secondary students’ disciplinary research reports. The results showed that: (1) elements of establishment of a smooth old-to-new information flow constituted, on average, less than 9 % of the reports’ length, with mean accuracy rates ranging from 64.31 % to 80.33 %; (2) elements of stance display constituted, on average, no more than 3 % of the reports’ length, with mean accuracy rates ranging from 58.33 % to 98.17 %; (3) elements of engagement with readers constituted, on average, no more than 2 % of the reports’ length, with mean accuracy rates ranging from 33.33 % to 88.64 %; (4) the mean accuracy rates of mastery of knowledge of discourse structure ranged from 0 % to 100 %. The findings reveal the secondary students’ DC in disciplinary research writing and pinpoint weak areas that instructors can work on. The proposed DC framework forms a foundation for future research on DC in discipline-specific research writing.
{"title":"Discourse competence in Hong Kong secondary students’ disciplinary research writing","authors":"Jack Pun , Sheng Tan , Xiang Li","doi":"10.1016/j.asw.2024.100872","DOIUrl":"10.1016/j.asw.2024.100872","url":null,"abstract":"<div><p>The study examined the frequency and accuracy of four discourse competence (DC) parameters (i.e., establishment of a smooth old-to-new information flow, stance display, engagement with readers, and mastery of knowledge of discourse structure) in Hong Kong English-as-a-second-language (ESL) secondary students’ disciplinary research reports. The results showed that: (1) elements of establishment of a smooth old-to-new information flow constituted, on average, less than 9 % of the reports’ length, with mean accuracy rates ranging from 64.31 % to 80.33 %; (2) elements of stance display constituted, on average, no more than 3 % of the reports’ length, with mean accuracy rates ranging from 58.33 % to 98.17 %; (3) elements of engagement with readers constituted, on average, no more than 2 % of the reports’ length, with mean accuracy rates ranging from 33.33% to 88.64 %; (4) the mean accuracy rates of mastery of knowledge of discourse structure ranged from 0 % to 100 %. The findings reveal the secondary students’ DC in disciplinary research writing and pinpoint weak areas that instructors can work on. The proposed DC framework forms a foundation for future research on DC in discipline-specific research writing.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100872"},"PeriodicalIF":4.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141736645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}