Though many studies have been conducted on written language task complexity, much remains to be explored about its relationship with writers’ fluency. In this study, we examined the potential impact of cognitive task complexity on L1 and L2 learners’ writing and spelling fluency, as well as differences in writing fluency and spelling between two L2 learner groups performing tasks of varied complexity. Ninety university students (30 native English speakers, 30 Hispanic English learners, and 30 Chinese English learners) participated in the study. We recorded the writing processes of the three learner groups using computer keystroke logging software (Inputlog 7.0) while they completed two argumentative tasks of varied cognitive complexity, and we analyzed their writing performance using multiple fluency measures. The results generally showed that task complexity plays an influential role in differentiating the participants’ writing fluency, and that task complexity interacts with the three learner groups’ writing and spelling fluency performance. These findings offer insights that technology developers can use to tailor their tools so that learners are better equipped to manage task-induced fluency challenges, ultimately supporting proficient TBLT-driven L2 writers.
This study inspires new pedagogical practices with evolving technological innovations. One example of such innovation is the emergence of artificial intelligence (AI) in education. The potential impact of generative AI, such as ChatGPT, on composition education has caused concerns among educators due to its human-like writing capabilities. However, ChatGPT-generated text is unavoidable, and because it is shaped by prompt engineering, its content can carry underlying issues, which creates a pedagogical opportunity for understanding human-written and AI-generated texts. Since, to date, there is no single reliable method for identifying AI-generated text, this study introduces a pedagogical approach, DETECT, with two major goals: (1) explore the nuances that differentiate human expression from the algorithmic patterns and tendencies of generative AI writing and (2) inspire ways to integrate generative AI in composition instruction in a post-plagiarism era. Using exploratory practice research, this article examines DETECT in the composition instruction of 32 students during Fall 2023 and Spring 2024. The findings showed that using DETECT improved students’ confidence in analyzing human-written and AI-generated texts, which enhanced their recognition and appreciation of their own writing voice. The study concludes with pedagogical implications for the possibilities of generative AI in writing instruction.
Revising Marxist theories of circulation with affect theory, this article establishes a new model of rhetorical analysis that positions rhetorical exchange as a circulatory infrastructure of late capitalism. By measuring the value produced by rhetors and audiences in rhetorical exchange, we can see how the daily rhetorical activity of neoliberal subjects captures our behavior, positioning us as raw material for late capitalists. This new theory of rhetorical circulation is tested and revised through a qualitative study of the mundane communication of neoliberal subjects, in this case, the group chat of one fantasy football league. Fantasy football communication creates an ambient backdrop for its users, leading to quotidian rhetorical exchanges in clearly defined social networks. The study shows the contours of rhetorical exchange in one league's GroupMe chat. I found that, in exchange, subjects transform their investments into social and cultural capital (Bourdieu's capital forms). Ultimately, subjects can produce what I call affective capital, a uniquely neoliberal capital form. I find that the immense value of affective capital produced by league members in rhetorical exchange points to the reasons why neoliberal subjects repeatedly return to platforms that harvest our data.
In writing, rhetoric, and composition studies, researchers have examined hashtags through their collectivizing, signifying, and disrupting qualities. In this piece, we propose that hashtags can also be deployed in ways that are distortive, meaning that individuals and communities can rhetorically implement hashtags that may appear illegible to outsiders while still being meaningful to those within the group. Specifically, we carry this conversation through the context of hashtags deployed by members of the pro-anorexia (pro-ana) community/ies on the microblogging site Tumblr. While researchers in health and medical fields have found it useful to turn to studying aggregable hashtag data to make recommendations for working with at-risk populations such as these, problems can arise when these communities use distorted hashtags to avoid algorithmic detection/aggregation processes. To illustrate, collecting data from hashtags such as #anorexic may not yield useful information when members of this population might use tags such as #anar3cic to communicate with one another. Thus, we suggest that researchers of digital rhetorics and in rhetorics of health and medicine pay closer attention to the affordances of distortions, rather than dismissing them as irrelevant to larger narratives of clarity. We also end with ethical considerations that arise from focusing on the rhetorical distortions of at-risk populations.
This article considers the influence of algorithmic attention systems on writing and human interaction within digital environments. Building on existing scholarship in writing studies on rhetorical strategies and critical literacies for algorithmic contexts, the author discusses how algorithmic attention systems and the conventions of writing-as-content reinforce a particular view of human attention. This approach to attention is especially problematic within forms of social media that involve interpersonal communication. The article closes by discussing possible future directions for writing teachers and researchers to continue conversations about the ethics of algorithmic attention systems and alternatives to thinking of writing as content.
This paper examines ChatGPT's use of evaluative language and engagement strategies while addressing information-seeking queries. It assesses the chatbot's role as a virtual teaching assistant (VTA) across various educational settings. By employing Appraisal theory, the analysis contrasts responses generated by ChatGPT with those contributed by humans, focusing on the interactants’ attitude, deployment of interpersonal metaphors, and evaluations of entities, revealing their views on Australian cultural practice. Two datasets were analysed: the first sample (15,909 words) was retrieved from the subreddit r/AskAnAustralian and the second (10,696 words) was obtained by prompting ChatGPT with the same questions. The findings show that, while human experts mainly opt for subjective explicit formulations to express personal viewpoints, the chatbot favours incongruent ‘it is’-constructions to share pre-programmed perspectives, which may reflect ideological bias. Even though ChatGPT displays promising socio-communicative capabilities (SCs), its lack of the contextual awareness required to function cross-culturally as a VTA may lead to considerable ethical issues. The study's novel contribution lies in the in-depth investigation of how the chatbot's SCs and lexicogrammatical selections may impact its role as a VTA, highlighting the need to develop students’ critical digital literacy skills while using AI learning tools.
This study explores EFL graduate students’ attitudes toward and emotions about teacher and peer feedback using a questionnaire and interviews. Throughout their advanced writing course, students were provided with feedback via Google Docs, which they then systematically arranged within their Google Drive-based e-portfolios. The interview findings confirmed that students had mixed attitudes and emotions toward e-portfolios. They expressed negative attitudes toward the timing and validity of peer feedback. However, they showed a positive attitude toward teacher feedback and the importance of maintaining the integrity of a Google Drive portfolio. Emotionally, the students felt frustrated by the focus of the feedback, anxious while providing feedback, confused by ambiguous comments from peers, and embarrassed by their errors. However, their confidence was bolstered by their adeptness with technology, and the interactive functions of Google Docs served as a motivating factor. Additionally, the t-test results comparing pre-test and post-test questionnaires showed statistical significance, indicating that students improved their compositions’ content, organization, and language with the assistance of peer and teacher feedback. The study underscores the importance of integrating technology into writing instruction while considering students’ emotions and attitudes. Thus, ongoing efforts are vital to develop the technological skills of both teachers and students.

