Researchers in second language (L2) writing studies are increasingly examining complex noun phrases (NPs). However, recent studies of NP complexity show a preference for advanced learners’ writing, even though the English writing of early-stage L2 learners already contains many NPs. In the present study, we used a corpus-based approach to investigate the development of NP complexity in argumentative and narrative compositions written by English as a foreign language (EFL) learners at different proficiency levels. The results show that eight NP complexity features exhibited distinct growth patterns across proficiency levels. Among the eight features, attributive adjectives and -ing participles as post-modifiers both reflect the developmental trajectory and the characteristics of Chinese EFL learners’ writing. We also found that the genre effect on NP complexity growth resulted from both task-related factors of the genres and learners’ exposure to those genres. Our results largely corroborate the developmental index proposed by Biber et al. (2011) and confirm that NP complexity begins to grow at early stages of L2 English learning and displays genre-specific features.
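To make the corpus-based approach concrete, the sketch below shows how two of the NP complexity features named above (attributive adjectives and -ing participles as post-modifiers) could be counted automatically with spaCy's dependency parser. This is an illustrative approximation, not the study's actual extraction pipeline; the function name and the dependency-label operationalizations are our own assumptions.

# Illustrative sketch (not the authors' pipeline): counting two NP
# complexity features with spaCy. Requires the small English model
# (python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")

def np_complexity_counts(text: str) -> dict:
    """Count attributive adjectives and -ing participle post-modifiers."""
    doc = nlp(text)
    # Attributive adjectives: adjectival modifiers of a noun head.
    attributive_adjectives = sum(
        1 for tok in doc
        if tok.dep_ == "amod" and tok.pos_ == "ADJ"
        and tok.head.pos_ in ("NOUN", "PROPN")
    )
    # -ing post-modifiers: clausal modifiers (acl) tagged VBG that
    # follow their noun head, e.g. "the students revising their essays".
    ing_postmodifiers = sum(
        1 for tok in doc
        if tok.dep_ == "acl" and tok.tag_ == "VBG" and tok.i > tok.head.i
    )
    return {
        "attributive_adjectives": attributive_adjectives,
        "ing_postmodifiers": ing_postmodifiers,
    }

print(np_complexity_counts(
    "The students revising their essays produced longer noun phrases."
))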
The present study employed a qualitative research design to investigate possible differences between L2 master’s and doctoral students’ preferences for supervisor written feedback. Although the role of learners’ preferences, as a component of attitudinal engagement, has been emphasized in the feedback literature, several gaps remain to be addressed. One of these gaps concerns L2 master’s and doctoral students’ preferences for supervisor written feedback on their theses and dissertations. To address this gap, the researcher interviewed 52 master’s and 21 doctoral Iranian English Language Teaching students. Thematic analysis of the interview data identified five main preferences: feedback that is clear, specific, encouraging, dialogic, and non-appropriative. Both master’s and doctoral students expressed strong preferences for clear and encouraging feedback. A markedly higher percentage of master’s students preferred specific comments, whereas doctoral students showed stronger preferences for non-appropriative and dialogic feedback. The findings also provide insights into the underlying factors that shape master’s and doctoral students’ preferences. Practical implications and suggestions for further research are also discussed.
While the educational field has made progress in understanding student feedback literacy, its impact on feedback engagement and student writing performance remains insufficiently explored. Furthermore, a cross-linguistic perspective has not yet been introduced into the literature on student feedback literacy, even though this approach has seen increasing use in both L1 and L2 learning research. The current study examined the relationship between L1 and L2 writing feedback literacies and how they may contribute to L2 feedback engagement and L2 writing performance. Data were collected from 231 English-major sophomore students at a Chinese university. Structural equation modeling showed that students’ L1 writing feedback literacy had a positive effect on their L2 writing feedback literacy. Further, L1 writing feedback literacy exerted an indirect effect on L2 writing performance via L2 writing feedback literacy and L2 feedback engagement. These findings underscore the pivotal role of L1 writing feedback literacy in L2 development and provide empirical evidence elucidating the close relationship between student feedback literacy and feedback engagement. The study concludes with pedagogical suggestions based on these findings.
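As a minimal sketch of the mediation chain reported above, the snippet below fits a path model with the semopy package on simulated data. The variable names, the simulated scores, and the use of observed composites (rather than the latent constructs with multiple indicators the study presumably estimated) are all assumptions for illustration.

# Minimal sketch (not the authors' model) of the reported mediation
# chain: L1 feedback literacy -> L2 feedback literacy -> engagement
# -> writing performance, fit with semopy on simulated data.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(0)
n = 231  # sample size reported in the abstract

# Hypothetical composite scores standing in for questionnaire data.
l1 = rng.normal(size=n)
l2 = 0.6 * l1 + rng.normal(scale=0.8, size=n)
eng = 0.5 * l2 + rng.normal(scale=0.8, size=n)
perf = 0.4 * eng + 0.3 * l2 + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({
    "L1_fb_literacy": l1,
    "L2_fb_literacy": l2,
    "L2_fb_engagement": eng,
    "L2_writing_perf": perf,
})

# Path model in lavaan-style syntax; the indirect effect of L1 literacy
# on performance runs through L2 literacy and engagement.
desc = """
L2_fb_literacy ~ L1_fb_literacy
L2_fb_engagement ~ L2_fb_literacy
L2_writing_perf ~ L2_fb_engagement + L2_fb_literacy
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values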
In this study, we conceptualize two approaches to computerized summary analysis, model-based and text-based, grounded in mental model and discourse comprehension theories. We juxtapose the model-based approach with the text-based approach to identify shared knowledge dimensions and associated measures, and use them to examine changes in students' summaries over time. We used 108 cases in which we computed model-based and text-based measures for two versions of students' summaries (i.e., initial and final revisions), yielding a total of 216 observations, which we analyzed with correlations, Principal Components Analysis (PCA), and Linear Mixed-Effects models. This exploratory investigation yielded a shortlist of text-based measures, and the PCA showed that both model-based and text-based measures captured the three-dimensional model (i.e., surface, structure, and semantic). Overall, model-based measures were better at tracking changes in the surface dimension, whereas text-based measures better described the structure dimension; both approaches worked well for the semantic dimension. The tested text-based measures can thus serve as a cross-reference for evaluating students' summaries alongside the model-based measures. The current study shows the potential of multidimensional measures to provide formative feedback on students' knowledge structure and writing styles along the three dimensions.
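The analysis pipeline described above (combined measures, PCA, and mixed-effects models over repeated observations) can be sketched as follows. The measure names are hypothetical stand-ins, not the study's actual model-based or text-based measures, and the simulated data only mirror the reported design (108 cases, two versions each, 216 observations).

# Hedged sketch of the reported pipeline: PCA over pooled model-based
# and text-based measures, then a linear mixed-effects model tracking
# change from initial to final summary with a random intercept per student.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for student in range(108):           # 108 cases
    for version in ("initial", "final"):
        rows.append({
            "student": student,
            "version": version,
            "surface_overlap": rng.normal(),  # model-based (hypothetical)
            "graph_density": rng.normal(),    # model-based (hypothetical)
            "cohesion_score": rng.normal(),   # text-based (hypothetical)
            "semantic_sim": rng.normal(),     # text-based (hypothetical)
        })
df = pd.DataFrame(rows)              # 216 observations, as in the study

measures = ["surface_overlap", "graph_density", "cohesion_score", "semantic_sim"]
X = StandardScaler().fit_transform(df[measures])
pca = PCA(n_components=3).fit(X)     # three dimensions: surface, structure, semantic
print(pca.explained_variance_ratio_)

# Does a measure change between versions, accounting for repeated
# observations from the same student?
fit = smf.mixedlm("semantic_sim ~ version", df, groups=df["student"]).fit()
print(fit.summary())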
Technology facilitates teacher corrective feedback on students' writing, but little is known about how written, audio, and screencast modes shape teachers' evaluative language in electronic (e-)feedback from a linguistic perspective. Drawing on the engagement resources of the appraisal framework within Systemic Functional Linguistics, this study examined the effect of written, audio, and screencast modes on an instructor's evaluative language in his e-feedback on the writing and text revisions of 15 pairs of Saudi EFL learners. The linguistic analysis revealed that the instructor's engagement resources differed across the three e-feedback modes. Specifically, the screencast and audio modes were dominated by expanding resources (resources that open up the space for dialogue), as opposed to the prevalence of contracting resources (resources that limit or shut down the space for dialogue) in the written mode. Moreover, the audio and screencast modes contained more statements and suggestions, whereas the written mode was dominated by commands/orders and suggested corrections. The content analysis revealed that the screencast mode addressed more global issues in writing, whereas the audio and written modes addressed more local issues. Despite the higher overall rate of successful text revisions resulting from the screencast and audio modes, no significant differences were found except for students' global text revisions. The study offers useful pedagogical implications for instructors responding effectively to students' writing.