In writing, rhetoric, and composition studies, researchers have examined hashtags through their collectivizing, signifying, and disrupting qualities. In this piece, we propose that hashtags can also be deployed in ways that are distortive, meaning that individuals and communities can rhetorically implement hashtags that may appear illegible to outsiders while remaining meaningful to those within the group. Specifically, we situate this conversation in the context of hashtags deployed by members of the pro-anorexia (pro-ana) community/ies on the microblogging site Tumblr. While researchers in health and medical fields have found it useful to study aggregable hashtag data to make recommendations for working with at-risk populations such as these, problems can arise when these communities use distorted hashtags to avoid algorithmic detection/aggregation processes. To illustrate, collecting data from hashtags such as #anorexic may not yield useful information when members of this population use tags such as #anar3cic to communicate with one another. Thus, we suggest that researchers in digital rhetorics and in the rhetorics of health and medicine pay closer attention to the affordances of distortions, rather than dismissing them as irrelevant to larger narratives of clarity. We end with ethical considerations that arise from focusing on the rhetorical distortions of at-risk populations.
This study explores EFL graduate students’ attitudes toward, and emotions about, teacher and peer feedback using a questionnaire and interviews. Throughout their advanced writing course, students received feedback via Google Docs, which they then systematically arranged within their Google Drive-based e-portfolios. The interview findings confirmed that students had mixed attitudes and emotions toward e-portfolios. They expressed negative attitudes toward the timing and validity of peer feedback, but positive attitudes toward teacher feedback and the importance of maintaining the integrity of a Google Drive portfolio. Emotionally, the students felt frustrated by the focus of the feedback, anxious while providing feedback, confused by ambiguous peer comments, and embarrassed by their errors. However, their confidence was bolstered by their adeptness with technology, and the interactive functions of Google Docs served as a motivating factor. Additionally, t-test results comparing the pre-test and post-test questionnaires were statistically significant, indicating that students improved the content, organization, and language of their compositions with the assistance of peer and teacher feedback. The study underscores the importance of integrating technology into writing instruction while attending to students’ emotions and attitudes. Accordingly, ongoing efforts are vital to develop the technological skills of both teachers and students.
This paper examines ChatGPT's use of evaluative language and engagement strategies when addressing information-seeking queries. It assesses the chatbot's role as a virtual teaching assistant (VTA) across various educational settings. Employing Appraisal theory, the analysis contrasts responses generated by ChatGPT with those written by human contributors, focusing on the interactants’ attitudes, deployment of interpersonal metaphors, and evaluations of entities, which reveal their views on Australian cultural practices. Two datasets were analysed: the first sample (15,909 words) was retrieved from the subreddit r/AskAnAustralian, and the second (10,696 words) was obtained by prompting ChatGPT with the same questions. The findings show that, while human experts mainly opt for subjective explicit formulations to express personal viewpoints, the chatbot favours incongruent ‘it is’-constructions to share pre-programmed perspectives, which may reflect ideological bias. Even though ChatGPT displays promising socio-communicative capabilities (SCs), its lack of the contextual awareness required to function cross-culturally as a VTA may lead to considerable ethical issues. The study's novel contribution lies in its in-depth investigation of how the chatbot's SCs and lexicogrammatical selections may impact its role as a VTA, highlighting the need to develop students’ critical digital literacy skills when using AI learning tools.