While the educational field has made progress in understanding student feedback literacy, its impact on feedback engagement and student writing performance remains insufficiently explored. Furthermore, the cross-linguistic perspective has not yet been introduced to the literature on student feedback literacy, even though this approach has seen increasing use in both L1 and L2 learning research. The current study examined the relationship between L1 and L2 writing feedback literacies and how they may contribute to L2 feedback engagement and L2 writing performance. Data were collected from 231 English-major sophomore students at a Chinese university. Results of structural equation modeling analyses showed that students’ L1 writing feedback literacy had a positive effect on their L2 writing feedback literacy. Further, L1 writing feedback literacy exerted an indirect effect on L2 writing performance via L2 writing feedback literacy and L2 feedback engagement. These findings underscore the pivotal role of L1 writing feedback literacy in L2 development and provide empirical evidence elucidating the close relationship between student feedback literacy and feedback engagement. The study concludes with pedagogical suggestions based on the observed outcomes.
Although much research has addressed feedback practices in L2 writing classes, few studies have investigated learner and teacher feedback perspectives from a broad angle. Drawing on an eight-dimension framework of feedback in writing classes, this study investigated potential matches and mismatches between Saudi university students' English writing feedback preferences and their teachers' reported practices. Quantitative and qualitative data were collected using parallel student and teacher questionnaires, which assessed students' preferences for, and teachers' use of, 26 writing feedback modes, strategies, and activities. A total of 575 undergraduate English majors at 11 Saudi universities completed the student questionnaire, and 82 writing instructors completed the teacher questionnaire. The data analysis revealed that the differences between the students' English writing feedback preferences and their teachers' practices varied from one feedback dimension to another. Overall, the study indicates that the mismatches between the students' writing feedback preferences and the teachers' reported practices far exceeded the matches. The qualitative data obtained from a set of open-ended questions in both questionnaires shed light on the students' and teachers' feedback-related beliefs and rationales. The paper concludes by discussing the results and their implications.
This study examined whether two integrated reading-to-write tasks could broaden the construct representation of the writing component of the Duolingo English Test (DET). It also examined whether they could enhance the DET's power to predict English academic writing performance in universities. The tasks were (1) writing a summary based on two source texts and (2) writing a reading-to-write essay based on five texts. Both were administered to a sample (N = 204) of undergraduates from Hong Kong. Each participant also submitted an academic assignment written for the assessment of a disciplinary course. Three professional raters double-marked all writing samples against detailed analytical rubrics. Raw scores were first processed using Multi-Faceted Rasch Measurement to estimate inter- and intra-rater consistency and to generate adjusted (fair) measures. Based on these measures, descriptive analyses, sequential multiple regression, and structural equation modeling were conducted, in that order. The analyses verified the writing tasks' underlying component constructs and assessed their relative contributions to the overall integrated writing scores. Both tasks were found to broaden the DET's construct representation and to add moderate predictive power for domain performance. The findings and their practical implications are discussed, especially regarding the complex relations between construct representation and predictive validity.
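The sequential (hierarchical) multiple regression step described above can be sketched as follows. This is a minimal illustration with synthetic data, not the study's dataset: the variable names (`summary_score`, `essay_score`, `domain_score`) and the simulated effect sizes are assumptions for demonstration only. The key idea is entering the two task scores in successive steps and inspecting the change in R² as each task's incremental contribution to predicting domain writing performance.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 204  # matches the study's sample size; the data here are synthetic
df = pd.DataFrame({
    "summary_score": rng.normal(0, 1, n),  # hypothetical summary-task measure
    "essay_score": rng.normal(0, 1, n),    # hypothetical essay-task measure
})
# Simulate a domain writing outcome loosely related to both task scores.
df["domain_score"] = (0.4 * df["summary_score"]
                      + 0.3 * df["essay_score"]
                      + rng.normal(0, 1, n))

# Step 1: enter the summary task alone.
m1 = smf.ols("domain_score ~ summary_score", data=df).fit()
# Step 2: add the reading-to-write essay task.
m2 = smf.ols("domain_score ~ summary_score + essay_score", data=df).fit()

# The change in R-squared indexes the essay task's incremental
# predictive power over and above the summary task.
delta_r2 = m2.rsquared - m1.rsquared
print(round(m1.rsquared, 3), round(m2.rsquared, 3), round(delta_r2, 3))
```

In an actual analysis, the fair (Rasch-adjusted) measures would replace the simulated scores, and the significance of the R² change would be tested with an F-test comparing the nested models.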
In this study, we conceptualize two approaches to computerized summary analysis, model-based and text-based, grounded in mental-model and discourse comprehension theories. We juxtapose the two approaches to identify shared knowledge dimensions and associated measures, and we use these measures to examine changes in students' summaries over time. We used 108 cases in which we computed model-based and text-based measures for two versions of students' summaries (i.e., initial and final revisions), yielding a total of 216 observations. Analyses included correlations, Principal Components Analysis (PCA), and linear mixed-effects models. This exploratory investigation yielded a shortlist of text-based measures, and the PCA findings demonstrated that both model-based and text-based measures mapped onto the three-dimensional model (i.e., surface, structure, and semantic). Overall, model-based measures were better at tracking changes in the surface dimension, while text-based measures better captured the structure dimension; both approaches worked well for the semantic dimension. The tested text-based measures can serve as a cross-reference, alongside the model-based measures, for evaluating students' summaries. The study demonstrates the potential of multidimensional measures to provide formative feedback on students' knowledge structure and writing styles along the three dimensions.
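The analytic pipeline above (PCA over the measures, then mixed-effects models for change across revisions) can be sketched as follows. This is a minimal sketch with synthetic data: the measure names (`surface`, `structure`, `semantic`) stand in for the study's actual model-based and text-based measures, and the simulated revision effects are assumptions for illustration. The structure mirrors the design: 108 students, each contributing an initial and a final revision, for 216 observations.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_students = 108
rows = []
for s in range(n_students):
    base = rng.normal(0, 1)  # student-level random intercept
    for t in (0, 1):         # 0 = initial revision, 1 = final revision
        rows.append({
            "student": s,
            "revision": t,
            # Hypothetical dimension scores with simulated revision gains.
            "surface": base + 0.5 * t + rng.normal(0, 0.5),
            "structure": base + rng.normal(0, 0.5),
            "semantic": base + 0.3 * t + rng.normal(0, 0.5),
        })
df = pd.DataFrame(rows)  # 216 observations

# PCA over the measure columns to inspect the dimensional structure.
pca = PCA(n_components=2)
components = pca.fit_transform(df[["surface", "structure", "semantic"]])

# Linear mixed-effects model: change across revisions with a
# random intercept per student (repeated measures).
mm = smf.mixedlm("surface ~ revision", df, groups=df["student"]).fit()
print(pca.explained_variance_ratio_)
print(mm.params["revision"])  # estimated initial-to-final change
```

The random intercept per student accounts for the repeated-measures design, so the `revision` coefficient isolates within-student change, which is the quantity the study tracks across the initial and final summaries.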

